Version: V200R011C02
Issue: 03
Date: 2012-11-30
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or representations
of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website:
http://www.huawei.com
Email:
support@huawei.com
Product Version

OptiX OSN 7500II: V200R011C02
OptiX OSN 7500:   V200R011C02
OptiX OSN 3500:   V200R011C02
OptiX OSN 1500:   V200R011C02
iManager U2000:   V100R005
This document describes the features and functions of the OptiX OSN 7500II/7500/3500/1500 equipment, in terms of basic concepts, availability, implementation principles, applications, configuration, and troubleshooting.

After reading this document, you will have an in-depth understanding of these features and functions, be able to configure them on the U2000, and know how to troubleshoot them on the U2000.

Intended Audience

This document is intended for:
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol    Description

DANGER    Indicates a hazard with a high level of risk which, if not avoided, will result in death or serious injury.
WARNING   Indicates a hazard with a medium or low level of risk which, if not avoided, could result in minor or moderate injury.
CAUTION   Indicates a potentially hazardous situation which, if not avoided, could result in equipment damage, data loss, performance deterioration, or unanticipated results.
TIP       Indicates a tip that may help you solve a problem or save time.
NOTE      Provides additional information to emphasize or supplement important points of the main text.
GUI Conventions
The GUI conventions that may be found in this document are defined as follows.
Convention   Description

Boldface     Buttons, menus, parameters, tabs, windows, and dialog titles are in boldface. For example, click OK.
>            Multi-level menus are in boldface and separated by ">" signs. For example, choose File > Create > Folder.
Change History
Updates between document issues are cumulative. Therefore, the latest document issue contains
all updates made in previous issues.
The description of "ATM PWE3", "ATM QoS", and "ATM OAM" support on the OptiX OSN 3500 and OptiX OSN 7500 is added. "Availability" is updated.
"Feature Dependency and Limitation" is added for each feature in "Packet Features".
"Configuring PW FPS Protection Groups" and "Configuration Example (PW FPS)" are added in "MPLS PW APS".
"MC-LAG" is optimized.
"MS-PW" is optimized.
"ETH-OAM" is optimized.
Contents
About This Document.....................................................................................................................ii
1 MPLS Basics...................................................................................................................................1
1.1 Introduction........................................................................................................................................................2
1.2 Basic Concepts...................................................................................................................................................2
1.2.1 MPLS Network Architecture.....................................................................................................................3
1.2.2 LSP............................................................................................................................................................3
1.2.3 Bearer Mode for MPLS Packets................................................................................................................5
1.2.4 MPLS Label...............................................................................................................................................6
1.3 Reference Standards and Protocols....................................................................................................................7
1.4 Availability.........................................................................................................................................................8
1.5 Principles............................................................................................................................................................9
2 PWE3..............................................................................................................................................12
2.1 PWE3 Basics....................................................................................................................................................13
2.1.1 Introduction.............................................................................................................................................13
2.1.2 Basic Concepts........................................................................................................................................13
2.1.2.1 PWE3 Network Reference Model..................................................................................................14
2.1.2.2 PWE3 Protocol Reference Model...................................................................................................15
2.1.2.3 PWE3 Encapsulation Format..........................................................................................................17
2.1.2.4 MS-PW...........................................................................................................................................18
2.1.2.5 VCCV.............................................................................................................................................20
2.1.3 Reference Standards and Protocols.........................................................................................................21
2.1.4 Availability..............................................................................................................................................21
2.1.5 Principles.................................................................................................................................................23
2.1.5.1 Packet Forwarding Process of SS-PW............................................................................................23
2.1.5.2 Packet Forwarding Process of MS-PW..........................................................................................24
2.2 ATM PWE3......................................................................................................................................................25
2.2.1 Introduction.............................................................................................................................................25
2.2.2 Basic Concepts........................................................................................................................................26
2.2.2.1 ATM N-to-One Cell Encapsulation................................................................................................26
2.2.2.2 ATM One-to-One Cell Encapsulation............................................................................................28
2.2.2.3 Number of ATM Cells Encapsulated in PWE3 Packets.................................................................30
2.2.2.4 QoS of ATM PWE3.......................................................................................................................31
2.2.3 Specifications...........................................................................................................................................31
2.2.4 Reference Standards and Protocols.........................................................................................................34
2.2.5 Availability..............................................................................................................................................34
2.2.6 Feature Dependency and Limitation........................................................................................................34
2.2.7 Principles.................................................................................................................................................38
2.2.8 Configuring ATM PWE3........................................................................................................................40
2.2.9 Configuration Example............................................................................................................................40
2.2.10 Relevant Alarms and Performance Events............................................................................................40
2.2.10.1 Relevant Alarms...........................................................................................................................40
2.2.10.2 Relevant Performance Events.......................................................................................................41
2.2.11 Parameter Description...........................................................................................................................41
2.2.11.1 Creating an ATM Service.............................................................................................................41
2.2.11.2 Connection....................................................................................................................................43
2.2.11.3 SDH Ports.....................................................................................................................................46
2.3 ETH PWE3.......................................................................................................................................................49
2.3.1 Introduction.............................................................................................................................................49
2.3.2 Basic Concepts........................................................................................................................................50
2.3.2.1 Format of an ETH PWE3 Packet....................................................................................................50
2.3.2.2 PW Encapsulation Mode................................................................................................................51
2.3.2.3 QoS of ETH PWE3.........................................................................................................................53
2.3.2.4 E-Line Services Carried on PWs....................................................................................................53
2.3.3 Reference Standards and Protocols.........................................................................................................56
2.3.4 Availability..............................................................................................................................................56
2.3.5 Feature Dependency and Limitation........................................................................................................58
2.3.6 Principles.................................................................................................................................................59
2.3.7 Configuration Example............................................................................................................................60
2.4 TDM PWE3......................................................................................................................................................60
2.4.1 Introduction.............................................................................................................................................60
2.4.2 Basic Concepts........................................................................................................................................61
2.4.2.1 E1 Frame Format............................................................................................................................61
2.4.2.2 SAToP............................................................................................................................................63
2.4.2.3 CESoPSN........................................................................................................................................66
2.4.2.4 Data Jitter Buffer............................................................................................................................69
2.4.2.5 CES Alarm Transmission...............................................................................................................70
2.4.2.6 Clock Recovery Schemes of TDM PWE3.....................................................................................71
2.4.2.7 QoS of TDM PWE3.......................................................................................................................72
2.4.3 Specifications...........................................................................................................................................73
2.4.4 Reference Standards and Protocols.........................................................................................................74
2.4.5 Availability..............................................................................................................................................74
2.4.6 Feature Dependency and Limitation........................................................................................................75
2.4.7 Principles.................................................................................................................................................78
3 ATM OAM....................................................................................................................................79
3.1 Introduction......................................................................................................................................................80
3.2 Basic Concepts.................................................................................................................................................80
3.2.1 ATM OAM Levels..................................................................................................................................80
3.2.2 Segment and End Attributes and Directions of CPs................................................................................81
3.2.3 ATM OAM Functions.............................................................................................................................83
3.3 Reference Standards and Protocols..................................................................................................................85
3.4 Availability.......................................................................................................................................................85
3.5 Feature Dependency and Limitation.................................................................................................................86
3.6 Principles..........................................................................................................................................................87
3.6.1 AIS/RDI...................................................................................................................................................87
3.6.2 CC............................................................................................................................................................89
3.6.3 LB............................................................................................................................................................90
3.7 Configuring ATM OAM..................................................................................................................................91
3.7.1 Setting Segment End Attribute................................................................................................................91
3.7.2 Setting the CC Activation Status.............................................................................................................92
3.7.3 Setting Local Loopback ID......................................................................................................................93
3.7.4 Setting a Remote Loopback Test.............................................................................................................94
3.7.5 Configuring ATM Alarm Transmission..................................................................................................95
3.8 Configuration Example.....................................................................................................................................95
3.8.1 Example Description...............................................................................................................................95
3.8.2 Configuration Process..............................................................................................................................97
3.9 Relevant Alarms and Performance Events.......................................................................................................99
3.9.1 Relevant Alarms......................................................................................................................................99
3.9.2 Relevant Performance Events................................................................................................................100
3.10 Parameter Description..................................................................................................................................100
3.10.1 Segment and End Attributes................................................................................................................100
3.10.2 CC Activation Status...........................................................................................................................101
3.10.3 Remote Loopback Test........................................................................................................................103
3.10.4 LLID....................................................................................................................................................105
3.10.5 Insert OAM Cell to ATM....................................................................................................................106
5 IMA.............................................................................................................................................. 139
5.1 Introduction....................................................................................................................................................141
5.2 Basic Concepts...............................................................................................................................................142
5.2.1 IMA Protocol Reference Model............................................................................................................142
5.2.2 IMA OAM Cells....................................................................................................................................142
5.2.3 IMA Frame Format................................................................................................................................145
5.2.4 IMA Timing...........................................................................................................................................147
5.3 Reference Standards and Protocols................................................................................................................148
5.4 Availability.....................................................................................................................................................148
5.5 Feature Dependency and Limitation...............................................................................................................149
5.6 Principles........................................................................................................................................................151
5.7 Configuration Procedure.................................................................................................................................151
5.8 Configuring IMA............................................................................................................................................152
5.8.1 Binding ATM TRUNKs........................................................................................................................152
5.8.2 Configuring an IMA Group...................................................................................................................154
5.8.3 Configuring ATM Interface Management Attributes............................................................................154
5.8.4 Querying Running Status of an IMA Group.........................................................................................155
5.8.5 Querying Link Running Status of an IMA Group.................................................................................155
5.8.6 Resetting an IMA Group.......................................................................................................................156
5.8.7 Modifying an IMA Group.....................................................................................................................156
5.8.8 Deleting an IMA Group.........................................................................................................................157
5.9 Configuration Example...................................................................................................................................157
5.9.1 Network Diagram..................................................................................................................................157
5.9.2 Service Planning....................................................................................................................................158
5.9.3 Configuration Process............................................................................................................................159
5.10 Relevant Alarms and Performance Events...................................................................................................160
5.10.1 Relevant Alarms..................................................................................................................................160
5.10.2 Relevant Performance Events..............................................................................................................162
5.11 Parameter Description..................................................................................................................................162
5.11.1 Configuring Bound Paths....................................................................................................................162
5.11.2 IMA Group Management....................................................................................................................164
5.11.3 ATM Port Management.......................................................................................................................169
5.11.4 IMA Group Status...............................................................................................................................170
6 ETH-OAM...................................................................................................................................175
6.1 Introduction to ETH-OAM.............................................................................................................................176
6.2 Ethernet Port OAM.........................................................................................................................................177
6.2.1 Basic Concepts......................................................................................................................................177
6.2.2 Reference Standards and Protocols.......................................................................................................178
6.2.3 Availability............................................................................................................................................178
6.2.4 Feature Dependency and Limitation......................................................................................................180
6.2.5 Principle Description.............................................................................................................................180
6.2.5.1 OAM Auto-Discovery..................................................................................................................180
6.2.5.2 Link Performance Monitoring......................................................................................................183
6.2.5.3 Remote Fault Detection................................................................................................................185
6.2.5.4 Remote Loopback.........................................................................................................................186
6.2.5.5 Selfloop Test.................................................................................................................................187
6.2.6 Networking and Application.................................................................................................................188
6.2.7 Configuring Ethernet Port OAM...........................................................................................................189
6.2.7.1 Setting the MAC Address for a Board..........................................................................................190
6.2.7.2 Enabling the Auto-Discovery Function of Ethernet Port OAM...................................................191
6.2.7.3 Configuring the Remote Alarm Support for Link Event..............................................................192
6.2.7.4 Setting OAM Error Frame Monitoring Parameters......................................................................193
6.2.7.5 Configuring the Remote Loopback..............................................................................................193
6.2.7.6 Enabling Self-Loop Detection......................................................................................................194
6.2.8 Relevant Alarms and Performance Events............................................................................................195
6.2.8.1 Relevant Alarms...........................................................................................................................195
6.2.8.2 Relevant Performance Events.......................................................................................................196
6.2.9 Parameter Description: Ethernet Port OAM..........................................................................................196
6.2.9.1 OAM Parameters..........................................................................................................................196
6.2.9.2 Remote OAM Parameters.............................................................................................................197
6.2.9.3 OAM Error Frame Monitoring.....................................................................................................198
6.3 Ethernet Service OAM...................................................................................................................................200
6.3.1 Basic Concepts......................................................................................................................................200
6.3.2 Reference Standards and Protocols.......................................................................................................203
6.3.3 Availability............................................................................................................................................203
6.3.4 Feature Dependency and Limitation......................................................................................................205
6.3.5 Principle Description.............................................................................................................................208
6.3.5.1 Continuity Check..........................................................................................................................209
6.3.5.2 Loopback Test..............................................................................................................................210
6.3.5.3 Link Trace Test.............................................................................................................................210
6.3.5.4 Performance Detection.................................................................................................................212
6.3.6 Networking and Application.................................................................................................................215
6.3.7 Configuring Ethernet Service OAM......................................................................................................217
6.3.7.1 Creating an MD............................................................................................................................217
8 HQoS............................................................................................................................................266
8.1 Introduction....................................................................................................................................................268
8.2 Basic Concepts...............................................................................................................................................268
8.2.1 QoS Requirements.................................................................................................................................268
8.2.2 DiffServ.................................................................................................................................................269
8.2.3 Traffic Classification.............................................................................................................................273
8.2.4 Queue Scheduling..................................................................................................................................273
8.2.5 Committed Access Rate.........................................................................................................................275
8.2.6 Congestion Management.......................................................................................................................276
8.2.7 QoS Policy.............................................................................................................................................276
8.3 Specifications..................................................................................................................................................277
8.4 Reference Standards and Protocols................................................................................................................279
8.5 Availability.....................................................................................................................................................279
8.6 Feature Dependency and Limitation...............................................................................................................281
8.7 Principles........................................................................................................................................................285
8.7.1 Traffic Policing......................................................................................................................................285
8.7.2 QoS Model.............................................................................................................................................288
8.8 Networking and Application..........................................................................................................................289
8.9 Configuring the HQoS....................................................................................................................................290
8.9.1 HQoS Configuration Flow.....................................................................................................................290
8.9.2 Creating the DiffServ Domain...............................................................................................................293
8.9.3 Creating the Port WRED Policy (OptiX OSN 1500)............................................................................294
8.9.4 Creating a Service WRED Policy..........................................................................................................294
8.9.5 Creating the WFQ Scheduling Policy...................................................................................................295
8.9.6 Creating the Port Policy (OptiX OSN 3500/7500/7500 II)...................................................................296
8.9.7 Creating the Port Policy (OptiX OSN 1500).........................................................................................297
8.9.8 Creating the V-UNI Ingress Policy (OptiX OSN 3500/7500/7500 II)..................................................298
8.9.9 Creating the V-UNI Ingress Policy (OptiX OSN 1500)........................................................................299
8.9.10 Creating a V-UNI Egress Policy.........................................................................................................300
8.9.11 Creating the PW Policy (OptiX OSN 3500/7500/7500 II)..................................................................301
8.9.12 Creating the QinQ Policy....................................................................................................................303
8.9.13 Creating the CAR Policy (OptiX OSN 1500).....................................................................................304
8.10 Configuration Example.................................................................................................................................304
8.10.1 Description of the Example.................................................................................................................304
Issue 03 (2012-11-30)
9 IGMP Snooping.........................................................................................................................341
9.1 Introduction to IGMP Snooping.....................................................................................................................343
9.2 Basic Concepts...............................................................................................................................................344
9.2.1 Multicast Protocol..................................................................................................................................344
9.2.2 IGMP.....................................................................................................................................................344
9.2.3 IGMP Snooping.....................................................................................................................................345
9.3 Reference Standards and Protocols................................................................................................................346
9.4 Availability.....................................................................................................................................................347
9.5 Feature Dependency and Limitation...............................................................................................................348
9.6 Principle Description......................................................................................................................................349
9.7 Networking and Application..........................................................................................................................350
9.8 Configuring the IGMP Snooping...................................................................................................................351
9.8.1 Setting the IGMP Snooping Parameters................................................................................................351
9.8.2 Configuring the Route Management.....................................................................................................352
9.8.3 Configuring the Route Member Port Management...............................................................................353
9.8.4 Configuring the Packet Statistics...........................................................................................................353
9.9 Configuration Example...................................................................................................................................354
9.9.1 Network Diagram..................................................................................................................................355
9.9.2 Service Planning....................................................................................................................................355
9.9.3 Configuration Process............................................................................................................................357
9.10 Relevant Alarms and Performance Events...................................................................................................358
9.11 Parameter Description: IGMP Snooping......................................................................................................358
9.11.1 Protocol Configuration........................................................................................................................358
9.11.2 Router Management............................................................................................................................359
9.11.3 Route Member Port Management........................................................................................................360
9.11.4 Packet Statistics...................................................................................................................................361
10 Link Aggregation.....................................................................................................................363
10.1 Introduction..................................................................................................................................................365
11 LPT..............................................................................................................................................407
11.1 Introduction..................................................................................................................................................409
11.2 Basic Concepts.............................................................................................................................................409
11.3 Specifications................................................................................................................................................410
11.4 Reference Standards and Protocols..............................................................................................................411
11.5 Availability...................................................................................................................................................411
11.6 Feature Dependency and Limitation.............................................................................................................412
11.7 Principles......................................................................................................................................................413
11.7.1 LPT State Transition............................................................................................................................414
11.7.2 Point-to-Point LPT Switching.............................................................................................................415
11.7.3 Point-to-Multipoint LPT Switching.....................................................................................................417
11.8 Configuring LPT...........................................................................................................................................422
11.8.1 Configuration Procedure......................................................................................................................422
11.8.2 Configuring Point-to-Point LPT..........................................................................................................425
11.8.3 Configuring Point-to-Multipoint LPT.................................................................................................425
11.9 Maintaining LPT Configurations..................................................................................................................427
11.9.1 Querying Point-to-Point LPT Configurations.....................................................................................427
11.9.2 Changing Point-to-Point LPT Attributes.............................................................................................427
11.9.3 Deleting Point-to-Point LPT Configurations.......................................................................................428
11.9.4 Querying Point-to-Multipoint LPT Configurations.............................................................................428
11.9.5 Changing Primary and Secondary Points for Point-to-Multipoint LPT..............................................429
11.9.6 Changing Point-to-Multipoint LPT Attributes....................................................................................430
11.9.7 Deleting Point-to-Multipoint LPT Configurations..............................................................................431
11.10 Configuration Example (LPT Based on an E-Line Service Exclusively Occupying a UNI Port).............431
11.10.1 Example Description.........................................................................................................................432
11.10.2 Configuration Process........................................................................................................................433
11.11 Configuration Example (LPT Based on Ethernet Services Sharing a UNI Port).......................................433
11.11.1 Example Description.........................................................................................................................434
11.11.2 Configuration Process........................................................................................................................435
11.12 Verifying LPT.............................................................................................................................................436
11.13 Troubleshooting..........................................................................................................................................438
11.14 Relevant Alarms and Performance Events.................................................................................................439
11.14.1 Relevant Alarms................................................................................................................................439
11.14.2 Relevant Performance Events............................................................................................................439
11.15 Relevant Parameters...................................................................................................................................440
11.15.1 LPT Management_Point-to-Point LPT.............................................................................................440
11.15.2 LPT Management_Point-to-Multipoint LPT.....................................................................................444
12 MC-LAG....................................................................................................................................446
12.1 Introduction..................................................................................................................................................448
12.2 Basic Concepts.............................................................................................................................................449
12.3 Specifications................................................................................................................................................450
12.4 Reference Standards and Protocols..............................................................................................................450
12.5 Availability...................................................................................................................................................451
12.6 Feature Dependency and Limitation.............................................................................................................452
12.7 Principles......................................................................................................................................................453
12.8 Configuring MC-LAG..................................................................................................................................456
12.8.1 Configuring Multi-chassis Synchronous Communication Tunnel......................................................456
13 MPLS PW APS.........................................................................................................................468
13.1 Introduction..................................................................................................................................................470
13.2 Basic Concepts.............................................................................................................................................472
13.3 Specifications................................................................................................................................................473
13.4 Reference Standards and Protocols..............................................................................................................475
13.5 Availability...................................................................................................................................................475
13.6 Feature Dependency and Limitation.............................................................................................................477
13.7 Principles......................................................................................................................................................479
13.7.1 State Model..........................................................................................................................................479
13.7.2 PW APS 1+1 Protection......................................................................................................................480
13.7.3 PW APS 1:1 Protection.......................................................................................................................481
13.7.4 PW FPS Protection..............................................................................................................................482
13.7.5 Switching Condition............................................................................................................................484
13.7.6 Switching Impact.................................................................................................................................486
13.8 Configuring PW APS...................................................................................................................................487
13.8.1 Configuring PW APS Protection Groups............................................................................................487
13.8.2 Configuring Slave Protection Pairs of PW APS..................................................................................492
13.8.3 Starting the APS Protocol....................................................................................................................495
13.8.4 Performing External Switching of PW APS........................................................................................495
13.8.5 Configuring PW FPS Protection Groups.............................................................................................496
13.9 Configuration Example (PW APS)...............................................................................................................500
13.9.1 Description of the Example.................................................................................................................500
13.9.2 PW APS Configuration Process..........................................................................................................508
13.9.3 PW APS Slave Protection Pair Configuration Process.......................................................................512
13.9.4 Verifying PW APS..............................................................................................................................516
13.10 Configuration Example (PW FPS).............................................................................................................517
13.10.1 Description of the Example...............................................................................................................517
13.10.2 PW FPS Configuration Process.........................................................................................................523
13.10.3 Verifying PW FPS.............................................................................................................................528
13.11 Relevant Alarms and Performance Events.................................................................................................529
13.11.1 Relevant Alarms................................................................................................................................529
13.11.2 Relevant Performance Events............................................................................................................530
15 MS-PW.......................................................................................................................................565
15.1 Introduction..................................................................................................................................................567
15.2 Basic Concepts.............................................................................................................................................568
15.3 Reference Standards and Protocols..............................................................................................................569
15.4 Availability...................................................................................................................................................569
15.5 Feature Dependency and Limitation.............................................................................................................570
15.6 Principles......................................................................................................................................................571
15.7 Configuring MS-PWs...................................................................................................................................572
15.8 Configuration Example.................................................................................................................................574
15.8.1 Description of the Example.................................................................................................................574
15.8.2 Configuration Process..........................................................................................................................576
15.9 Verifying MS-PW.........................................................................................................................................578
15.10 Relevant Alarms and Performance Events.................................................................................................581
19 RMON........................................................................................................................................631
20 Clock Solution..........................................................................................................................656
20.1 SDH Clock Synchronization........................................................................................................................657
20.1.1 Clock....................................................................................................................................................657
20.1.2 Basic Concepts....................................................................................................................................657
20.1.2.1 Clock Synchronization...............................................................................................................657
20.1.2.2 SSM Protocol and Clock ID.......................................................................................................658
20.1.2.3 Clock Protection.........................................................................................................................660
20.1.2.4 Tributary Retiming.....................................................................................................................665
20.1.3 Reference Standards and Protocols.....................................................................................................668
20.1.4 Availability..........................................................................................................................................668
20.1.5 Principles.............................................................................................................................................669
20.1.6 Configuring Clocks..............................................................................................................................670
20.1.6.1 Guidelines on Clock Configuration............................................................................................670
20.1.6.2 Clock Configuration Process......................................................................................................671
20.1.6.3 Viewing Clock Synchronization Status......................................................................................672
20.1.6.4 Configuring NE Clock Sources..................................................................................................673
20.1.6.5 Configuring the Clock Source Protection...................................................................................674
20.1.6.6 Configuring Switching Conditions for Clock Sources...............................................................675
20.1.6.7 Configuring the Clock Source Reversion...................................................................................675
21 Outband DCN..........................................................................................................................782
21.1 Introduction to the DCN...............................................................................................................................784
21.1.1 DCN Composition...............................................................................................................................784
21.1.2 Huawei DCN Solution.........................................................................................................................785
21.2 HWECC Solution.........................................................................................................................................786
21.2.1 Solution Overview...............................................................................................................................786
21.2.1.1 Basic Concepts...........................................................................................................................786
21.2.1.2 Networking.................................................................................................................................789
21.2.2 Availability..........................................................................................................................................791
21.2.3 Relation with Other Features...............................................................................................................792
21.2.4 Principles.............................................................................................................................................792
21.2.4.1 Establishing ECC Routes............................................................................................................792
21.2.4.2 Transferring Messages................................................................................................................794
21.2.4.3 Extended ECC............................................................................................................................795
21.6.7 Example of Hybrid Networking Based on Transparent Transmission of DCC Bytes Through the External Clock Interface...............................................................................................................................886
21.7 DCN Maintenance........................................................................................................................................886
21.7.1 Troubleshooting...................................................................................................................................886
21.7.1.1 A Single NE Being Unreachable................................................................................................886
21.7.1.2 All NEs of the Subnet Being Unreachable.................................................................................889
21.7.1.3 NEs Being Unreachable Frequently...........................................................................................893
21.7.2 Maintenance Cases..............................................................................................................................894
21.7.3 Relevant Alarms and Performance Events..........................................................................................895
A List of Parameters.....................................................................................................................927
A.1 Ethernet Port Associated Parameters (Packet Mode)....................................................................................929
B Glossary....................................................................................................................................1312
1 MPLS Basics
1.1 Introduction
This section provides the definition of MPLS and describes its purpose.
Definition
Based on IP routes and control protocols, MPLS is a connection-oriented switching technology
for the network layer. MPLS encapsulates packets with short, fixed-length labels that are
independent of the link-layer technology, and forwards the packets by switching these labels.
MPLS has two planes: control plane and forwarding plane. The control plane is connectionless,
featuring powerful and flexible routing functions to meet network requirements for a variety of
new applications. This plane is mainly responsible for label distribution, setup of label
forwarding tables, and setup and removal of label switched paths (LSPs). The forwarding plane
is also called the data plane. It is connection-oriented and supports Layer 2 networks such as
ATM and Ethernet networks. The forwarding plane adds or deletes IP packet labels, and forwards
the packets according to the label forwarding table.
Purpose
In the packet domain, MPLS helps to set up MPLS tunnels to carry PWs that transmit a variety
of services on a PSN in an end-to-end manner. These services include TDM, ATM, and Ethernet
services. Figure 1-1 shows the typical MPLS application in the packet domain. In the figure,
the services between the NodeBs and RNCs are transmitted by PW1 and PW2 carried by the
MPLS tunnel.
Figure 1-1 Typical MPLS application

(Figure: NodeBs connect to RNCs across a PSN. Ethernet/ATM/TDM services enter the PSN at NE1, are carried by PW1 and PW2 over an MPLS tunnel, and exit at NE2 toward the RNCs.)
(Figure: an MPLS network in which LERs at the edge interconnect with other MPLS networks, while LSRs, including a core LSR, forward traffic within the network.)
An LSR ID uniquely identifies an LSR on an MPLS network. An LSR ID can be 4-byte long or
16-byte long. A 4-byte LSR ID is based on an IPv4 address, and a 16-byte LSR ID is based on
an IPv6 address.
NOTE
Currently, the Hybrid MSTP equipment supports only LSR IDs based on IPv4 addresses.
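As a quick illustration of the 4-byte form, an IPv4-based LSR ID is simply the four octets of the IPv4 address. The function name below is illustrative and not part of the product software:

```python
import ipaddress

def lsr_id_from_ipv4(addr: str) -> bytes:
    """Derive a 4-byte LSR ID from an IPv4 address.

    Per the note above, IPv4-based LSR IDs are the only form the
    Hybrid MSTP equipment supports.
    """
    return ipaddress.IPv4Address(addr).packed

# A 4-byte LSR ID derived from 10.0.0.1
assert lsr_id_from_ipv4("10.0.0.1") == b"\x0a\x00\x00\x01"
```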
1.2.2 LSP
Label switched paths (LSPs), also called MPLS tunnels, are classified into various types
depending on different classification criteria.
An LSP is unidirectional. As shown in Figure 1-3, LSRs on an LSP can be classified into the
following types:
l
Ingress
An LSP ingress node pushes a label onto the packet for MPLS packet encapsulation and
forwarding. One LSP has only one ingress node.
l
Transit
An LSP transit node swaps labels and forwards MPLS packets according to the label
forwarding table. One LSP may have one or more transit nodes.
l
Egress
An LSP egress node pops the label and recovers the packet for forwarding. One LSP has
only one egress node.
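The three node roles can be sketched as follows. This is an illustrative model only, not Huawei software; the node functions, label values, and table contents are made up:

```python
def ingress(packet, push_label):
    """Ingress: push a label onto the packet for MPLS encapsulation."""
    return {"labels": [push_label], "payload": packet}

def transit(mpls_packet, swap_table):
    """Transit: swap the top label per the label forwarding table."""
    top = mpls_packet["labels"][-1]
    mpls_packet["labels"][-1] = swap_table[top]
    return mpls_packet

def egress(mpls_packet):
    """Egress: pop the label and recover the original packet."""
    mpls_packet["labels"].pop()
    return mpls_packet["payload"]

# One unidirectional LSP: ingress -> transit -> transit -> egress
pkt = ingress("ip-packet", push_label=100)
pkt = transit(pkt, {100: 200})  # first transit swaps 100 -> 200
pkt = transit(pkt, {200: 300})  # second transit swaps 200 -> 300
out = egress(pkt)               # label popped, payload recovered
assert out == "ip-packet"
```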
(Figure 1-3: an LSP crossing an MPLS network, entering at the ingress node, passing through two transit nodes, and leaving at the egress node toward another MPLS network.)
LSP Types
LSPs are classified into various types depending on different classification criteria. For details,
see Table 1-1.
Table 1-1 LSP types

Aspect: Setup mode
l Static tunnel
l Dynamic tunnel: It refers to an LSP established by running the label distribution protocol (LDP).

Aspect: Direction
l Unidirectional tunnel: It refers to an LSP in a specific direction.
l Bidirectional tunnel: It refers to a pair of LSPs that have the same path but reverse directions.

Aspect: DiffServ identification mode
l E-LSP
l L-LSP

Aspect: LSP mode
l Uniform
l Pipe
(Figure: format of an Ethernet frame bearing an MPLS packet: Destination address, Source address, 802.1q header, Length/Type, MPLS packet, FCS (CRC-32).)
Destination address: It is the MAC address of the opposite interface, learned through the
Address Resolution Protocol (ARP). It changes at each hop.
Source address: It always takes the MAC address of the system. It changes at each hop.
802.1q header: The Hybrid MSTP equipment determines whether an Ethernet frame at the
egress port carries the 802.1q header based on the TAG attribute of an Ethernet port. If the
TAG attribute is Access, the Ethernet frame will not carry the 802.1q header; if the TAG
attribute is Tag aware, the VLAN ID of the 802.1q header in an MPLS packet is the tunnel
VLAN ID that is set on the NMS; if the tunnel VLAN ID is absent, the VLAN ID of the
802.1q header is the default VLAN ID (that is, 1) at the NNI port transmitting MPLS
packets.
Length/Type: It is always set to 0x8847. On detecting this value, the Hybrid MSTP
equipment considers the frame to be an Ethernet frame carrying an MPLS packet. The NE
does not check MPLS packets at the ingress port against the TAG attribute and VLAN ID
of an LSP.
MPLS packet: It consists of the MPLS label and Layer 3 user packet. For details on its
format, see 1.2.4 MPLS Label.
Frame check sequence (FCS): It is used to verify that the Ethernet frame is correct.
NOTE
The ARP is used to translate the IP address at the network layer (that is, logical address) into the MAC
address at the data link layer (that is, physical address). By default, when the TAG attribute of a UNI port
is Tag aware, an ARP packet that is transmitted or received through an NNI port has a VLAN ID that is
the default value of the NNI port. Therefore, the TAG attribute and default VLAN ID of an NNI port must
be the same as those of a peer NNI port.
Both FE and GE ports use Ethernet frames to bear MPLS packets.
Label Format
The Hybrid MSTP equipment uses Ethernet frames to bear MPLS packets. Figure 1-5 shows
the format of the MPLS label.
Figure 1-5 Format of the MPLS label
(Figure 1-5: bits 0 to 19 of the MPLS label carry the Label value, bits 20 to 22 the EXP field, bit 23 the S flag, and bits 24 to 31 the TTL; the label is followed by the Layer 3 payload in the MPLS packet.)
EXP: This 3-bit field is reserved for experimental use. On the Hybrid MSTP equipment,
the EXP is used to identify the priority of an MPLS packet, similar to the VLAN priority
specified in IEEE 802.1q.
S: This 1-bit field identifies the bottom of stack. MPLS supports multiple labels, that is,
label stacking. This bit is set to 1 for the bottom label in the label stack.
Time to Live (TTL): This 8-bit field has the same meaning as the TTL specified for IP
packets.
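The bit layout above corresponds to a single 32-bit label stack entry. A small Python sketch of packing and unpacking it, with the field widths as described (20-bit Label, 3-bit EXP, 1-bit S, 8-bit TTL):

```python
def pack_label(label, exp, s, ttl):
    """Pack one 32-bit MPLS label stack entry."""
    assert 0 <= label < 1 << 20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def unpack_label(entry):
    """Split a 32-bit entry back into its four fields."""
    return {"label": entry >> 12, "exp": (entry >> 9) & 0x7,
            "s": (entry >> 8) & 0x1, "ttl": entry & 0xFF}

entry = pack_label(label=20, exp=5, s=1, ttl=64)
assert unpack_label(entry) == {"label": 20, "exp": 5, "s": 1, "ttl": 64}
```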
Label Stack
A label stack refers to an ordered set of labels. MPLS allows a packet to carry multiple labels.
The label next to the Layer 2 header is called the top label or outer label, and the label next to
the IP header is called the bottom label or inner label. Theoretically, an unlimited number of
MPLS labels can be stacked.
Figure 1-6 MPLS label stack
(Figure 1-6: an Ethernet/PPP header, followed by the label stack (the outer label first, the inner label last), and then the Layer 3 payload.)
The label stack is organized as a Last In, First Out stack. The top label is always processed first.
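The stack ordering and the S bit can be sketched together: when a stack is built, only the bottom (inner) entry carries S = 1. The label values below are illustrative:

```python
def build_stack(labels, exp=0, ttl=64):
    """Build stack entries outer-first; S = 1 marks only the bottom entry."""
    return [(lbl << 12) | (exp << 9) | (int(i == len(labels) - 1) << 8) | ttl
            for i, lbl in enumerate(labels)]

stack = build_stack([100, 200])    # outer label 100, inner (bottom) label 200
assert (stack[0] >> 8) & 1 == 0    # outer entry: not bottom of stack
assert (stack[1] >> 8) & 1 == 1    # inner entry: bottom of stack, S = 1
```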
Label Space
The value range for label distribution is called a label space. Two types of label space are
available:
l
The Hybrid MSTP equipment supports only the per-platform label space with a fixed value of
32768. Values of 0 to 15 are reserved for special use and cannot be assigned to LSPs as service
labels. Ingress labels and egress labels must be unique per NE.
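A per-platform label allocator that honours the reserved range 0 to 15 and the fixed 32768-label space might look like this; the class name and any behaviour beyond the text above are assumptions:

```python
class LabelSpace:
    """Per-platform label space: labels 0-15 are reserved, 16-32767 assignable."""
    SIZE = 32768

    def __init__(self):
        self._in_use = set()

    def allocate(self):
        for label in range(16, self.SIZE):   # skip the reserved range 0-15
            if label not in self._in_use:
                self._in_use.add(label)
                return label
        raise RuntimeError("label space exhausted")

    def release(self, label):
        self._in_use.discard(label)

space = LabelSpace()
assert space.allocate() == 16   # the first label assignable to an LSP
```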
1.4 Availability
The MPLS function requires the support of the applicable equipment, boards, and software.
Version Support
(Table: applicable equipment and versions, managed by the T2000/U2000 NMS.)
Hardware Support
Applicable boards: N1PEG16, N1PEX1, N1PETF8, R1PEFS8, Q1PEGS2, R1PEGS1, N1PEG8, N1PEX2, N2PEX1, N1PEFF8, R1PEF4F, TNN1EX2, TNN1EG8, TNN1ETMC, TNN1EFF8.
1.5 Principles
On an MPLS network, LSRs enable the packets with the same characteristics to be transmitted
on one LSP based on a unified forwarding mechanism.
(Figure: label operation along an LSP (LSP ID = 101). NE A is the ingress and pushes label 20; NE B and NE C are transit nodes and swap the label (20 to 21, then 21 to 22); NE D is the egress and pops the label. Each node looks up its incoming label map (ILM).)
The ingress, transit, and egress nodes handle an MPLS packet as follows.
NHLFE on the ingress node (NE A): LSP ID 101, outgoing interface PORT 1, next hop PORT 2, outgoing label 20, operation Push.
1. Receives a packet, and finds the LSP ID based on the FEC of the packet.
2. Finds the NHLFE based on the LSP ID and then obtains information such as the outgoing
interface, next hop, outgoing label, and operation. The label operation for an ingress node
is Push.
3. Pushes an MPLS label onto the packet, and forwards the encapsulated MPLS packet to the
next hop.
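These three steps can be sketched in Python. The table values mirror the NE A example; the FEC prefix and the dictionary-based tables are illustrative, not the equipment's actual data structures:

```python
FEC_TABLE = {"10.1.0.0/16": 101}   # FEC -> LSP ID (assumed prefix)
NHLFE = {101: {"out_if": "PORT 2", "out_label": 20, "op": "push"}}

def ingress_forward(packet, fec):
    lsp_id = FEC_TABLE[fec]            # 1. find the LSP ID from the FEC
    entry = NHLFE[lsp_id]              # 2. look up the NHLFE (operation Push)
    labeled = {"label": entry["out_label"], "payload": packet}
    return entry["out_if"], labeled    # 3. push the label, send toward the next hop

out_if, pkt = ingress_forward({"dst": "10.1.2.3"}, "10.1.0.0/16")
assert out_if == "PORT 2" and pkt["label"] == 20
```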
NHLFE on the transit node (NE B): LSP ID 101, outgoing interface PORT 3, next hop PORT 4, outgoing label 21, operation Swap.
1. Finds the LSP ID based on the label value of the MPLS packet received at the incoming
interface.
2. Finds the NHLFE based on the LSP ID and then obtains information such as the outgoing
interface, next hop, outgoing label, and operation. The label operation for a transit node is
Swap.
3. The outgoing label value in the NHLFE is 21. NE B therefore replaces the old label value of
20 with the new label value of 21 and then sends the MPLS packet carrying the new label to
the next hop.
NOTE
If the value of the new label is equal to or greater than 16, the label operation is Swap. If the value of the
new label is less than 16, this label is special and needs to be processed according to the specific value of
the label.
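A matching sketch for the transit node NE B, including the reserved-label check from the note above; the dictionary-based ILM and NHLFE are illustrative:

```python
ILM = {20: 101}   # incoming label -> LSP ID
NHLFE = {101: {"out_if": "PORT 3", "out_label": 21, "op": "swap"}}

def transit_forward(labeled_pkt):
    if labeled_pkt["label"] < 16:
        raise ValueError("reserved label: needs special processing")
    lsp_id = ILM[labeled_pkt["label"]]          # 1. label -> LSP ID
    entry = NHLFE[lsp_id]                       # 2. LSP ID -> NHLFE (operation Swap)
    labeled_pkt["label"] = entry["out_label"]   # 3. swap label 20 for label 21
    return entry["out_if"], labeled_pkt

out_if, pkt = transit_forward({"label": 20, "payload": b"data"})
assert pkt["label"] == 21
```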
NHLFE on the egress node (NE D): LSP ID 101, operation Pop (no outgoing interface, next hop, or outgoing label).
1. Finds the LSP ID based on the label value of the MPLS packet received at the incoming
interface.
2. Finds the NHLFE based on the LSP ID and then determines that the label operation is Pop.
3. Pops the label, recovers the original packet, and forwards it.
2 PWE3
2.1.1 Introduction
This section provides the definition of pseudo wire emulation edge-to-edge (PWE3) and
describes its purpose.
Definition
PWE3 is a Layer 2 service bearer technology that emulates the basic behaviors and
characteristics of services such as ATM/IMA, Ethernet, and TDM on a packet switched network
(PSN).
Aided by the PWE3 technology, conventional networks can be connected by a PSN. Therefore,
resource sharing and network scaling can be achieved.
Purpose
PWE3 aims to transmit various services such as ATM, Ethernet, and TDM over a PSN. Figure
2-1 shows the PWE3 application. The Ethernet, ATM, and TDM services between NodeBs and
RNCs are emulated by means of PWE3 on NE1 and NE2, and then are transmitted on the pseudo
wires (PWs) between NE1 and NE2.
Figure 2-1 Typical application of PWE3
(Figure 2-1: NodeBs connect to NE1 and RNCs connect to NE2 over Ethernet/ATM/TDM; PW1 and PW2 run between NE1 and NE2 inside an MPLS tunnel across the PSN.)
(Figure 2-2: the PWE3 network reference model. CE1 attaches to PE1 and CE2 attaches to PE2 over ACs; PW1 and PW2 connect PE1 and PE2 across the PSN, carrying the native services.)
NOTE
In the network reference model, PWs are carried in a PSN tunnel; that is, a single-segment PW (SS-PW).
The concepts found in the network reference model shown in Figure 2-2 are defined as follows.
CE
A CE is a device that originates or terminates a service. The CE is not aware that it is using an
emulated service rather than a native service.
PE
A PE is a device that provides PWE3 to a CE. Located at the edge of a network, a PE is connected
to a CE through an AC.
In the PWE3 network reference model, the mapping relationship between an AC and a PW is
determined once a PW is created between two PEs. As a result, Layer 2 services on CEs can be
transmitted over a PSN.
AC
An AC is a physical or virtual circuit attaching a CE to a PE. An AC can be, for example, an
Ethernet port, a VLAN, or a TDM link.
PW
A PW is a mechanism that carries emulated services from one PE to another PE over a PSN. By
means of PWE3, point-to-point channels are created, separated from each other. Users' Layer 2
packets are transparently transmitted on a PW.
PWs are available in two types depending on whether signaling protocols are used or not.
Specifically, a PW that does use signaling protocols is called a dynamic PW, whereas a PW that
does not use signaling protocols is called a static PW.
NOTE
Tunnel
A tunnel provides a mechanism that transparently transmits information over a network. In a
tunnel, one or more PWs can be carried. A tunnel connects a local PE and a remote PE for
transparently transmitting data.
PSN tunnels are available in several types, but the Hybrid MSTP equipment supports only MPLS
tunnels. In this document, PWE3 is generally based on the MPLS tunnel (LSP), unless otherwise
specified.
NOTE
PW and tunnel are both logical concepts. PW bandwidth is the bandwidth of its carried service, and tunnel
bandwidth is the bandwidth of all its carried PWs.
(Figure: the PWE3 protocol reference model. On the CE side, the service interface (TDM, ATM, Ethernet, etc.) feeds the native service processing and forwarder layers; the emulated service then passes through the payload encapsulation and PW demultiplexer layers, and finally the PSN tunnel, PSN, and physical headers toward the PSN.)
In the PWE3 protocol reference model, pre-processing involves the native service processing
layer and forwarder layer, whereas protocol processing involves the encapsulation layer and
demultiplexer layer. The main functions of these layers are described as follows.
Forwarder
A forwarder selects the PW for the service payloads received on an AC. The mapping
relationships can be specified in the service configuration, or implemented through certain types
of dynamically configured information.
PW Demultiplexer Layer
The PW demultiplexer layer enables one or more PWs to be carried in a single PSN tunnel.
(Figure: format of a PWE3 packet: a tunnel label and a PW label (each carrying EXP, S, and TTL fields), an optional control word, and the payload (a Layer 2 PDU).)
MPLS Label
The MPLS labels include tunnel labels and PW labels, which are used to identify tunnels and
PWs respectively. The format of the tunnel label is the same as that of the PW label. For details,
see 1.2.4 MPLS Label.
Control Word
The 4-byte control word is a header used to carry packet information over an MPLS PSN.
The control word is used to check the packet sequence, to fragment packets, and to restructure
packets. As shown in Table 2-1, the specific format of the control word is determined by the
service type carried by PWE3 and the encapsulation mode adopted.
Table 2-1 Formats of the control word for various services in different encapsulation modes
The table covers TDM PWE3 (SAToP encapsulation), ATM PWE3, and ETH PWE3 (Ethernet encapsulation).
Payload
Payload indicates the payload of a service in a PWE3 packet.
2.1.2.4 MS-PW
A PW that is carried in a PSN tunnel is called a single-segment PW (SS-PW). A PW is called a
multi-segment PW (MS-PW) if it is segmented and carried in multiple PSN tunnels.
(Figure: the MS-PW network reference model. CE1 attaches to T-PE1 and CE2 attaches to T-PE2 over ACs; PW1 and PW2 in PSN tunnel 1 are switched to PW3 and PW4 in PSN tunnel 2 at S-PE1, the PW switching point.)
In the preceding network reference model, T-PE1 and T-PE2 provide PWE3 services to CE1
and CE2. The PWs are carried in two PSN tunnels, and constitute the MS-PW.
The two tunnels (PSN tunnel 1 and PSN tunnel 2) that are used to carry PWs reside in different
PSN domains. PSN tunnel 1 extends from T-PE1 to S-PE1, and PSN tunnel 2 extends from S-PE1 to T-PE2. Labels of PW1 carried in PSN tunnel 1 and PW3 carried in PSN tunnel 2 are
swapped at S-PE1. Similarly, labels of PW2 carried in PSN tunnel 1 and PW4 carried in PSN
tunnel 2 are swapped at S-PE1.
MS-PW Application
Compared with the SS-PW, the MS-PW has the following characteristics:
l
The following paragraphs and figures compare the application scenarios of the SS-PW and MS-PW to show that it is easier for the MS-PW to implement segment-based protection for tunnels.
Figure 2-6 shows the SS-PW networking mode. The services between PE1 and PE2 are
transmitted on PW1 carried in MPLS tunnel 1. Both MPLS tunnel 1 and MPLS tunnel 2 are
configured with 1:1 protection. Protection, however, fails to be provided if disconnection faults
occur on different sides of the operator device (called the P device).
Figure 2-6 SS-PW application
(Figure 2-6: PE1 and PE2 connect through packet transmission equipment; PW1 is carried in MPLS tunnel 1, with MPLS tunnel 2 as its 1:1 protection tunnel.)
Figure 2-7 shows the MS-PW networking mode. The services between T-PE1 and T-PE2 are
transmitted on PW1 carried in MPLS tunnel 1 and PW2 carried on MPLS tunnel 2. The paired
tunnels (MPLS tunnel 1 and MPLS tunnel 3; MPLS tunnel 2 and MPLS tunnel 4) are configured
with 1:1 protection. In this configuration, protection can still be provided even when
disconnection faults occur on different sides of the S-PE1 device.
Figure 2-7 MS-PW application
(Figure 2-7: T-PE1 and T-PE2 connect through S-PE1; PW1 is carried in MPLS tunnel 1 (protected 1:1 by MPLS tunnel 3) between T-PE1 and S-PE1, and PW2 is carried in MPLS tunnel 2 (protected 1:1 by MPLS tunnel 4) between S-PE1 and T-PE2.)
2.1.2.5 VCCV
As specified in IETF RFC 5085, virtual circuit connectivity verification (VCCV) is an end-to-end fault detection and diagnostics mechanism for a PW. The VCCV mechanism is, in its
simplest description, a control channel between a PW's ingress and egress points over which
connectivity verification messages can be sent.
The VCCV messages are exchanged between PEs to verify connectivity of PWs. To ensure that
VCCV messages and PW packets traverse the same path, VCCV messages must be encapsulated
in the same manner as PW packets and be transmitted in the same tunnel as the PW packets.
VCCV messages have the following formats.
(Figure: a VCCV message based on the control word. The tunnel label and PW label are followed by a control channel header carrying the 0001, Version, Reserved, and Channel type fields, and then the MPLS echo message (IPv4 UDP).)
Channel type: The Channel Type is set to 0x0021 for IPv4 payloads and 0x0057 for IPv6
payloads.
(Figure: a VCCV message based on the OAM alert label. The tunnel label and PW label are followed by an OAM alert label whose TTL is set to 1.)
The main fields in a VCCV message based on OAM alert label are defined as follows:
l
Time to Live (TTL): The value of this field is set to 1, to ensure that the MPLS OAM packet
is not transmitted beyond the sink end of the monitored LSP.
IETF RFC 5085: Pseudowire Virtual Circuit Connectivity Verification (VCCV): A Control
Channel for Pseudowires
2.1.4 Availability
The PWE3 function requires the support of the applicable equipment, boards, and software.
Version Support
(Table: applicable equipment and versions, managed by the T2000/U2000 NMS.)
Hardware Support
Applicable boards: N1PEG16, N1PEX1, N1PETF8, R1PEFS8, Q1PEGS2, R1PEGS1, N1PEG8, N1PEX2, N2PEX1, N1PEFF8, R1PEF4F, N1CQ1, N1MD12, N1MD75, R1ML1, TNN1EX2, TNN1EG8, TNN1ETMC, TNN1AFO1, TNN1D75E, TNN1D12E, TNN1CO1, TNN1EFF8.
2.1.5 Principles
The SS-PW and MS-PW use different packet forwarding mechanisms.
(Figure: SS-PW packet forwarding. CE1 (NodeB) attaches to PE1 and CE2 (RNC) attaches to PE2 over ACs; PW1 is carried in an MPLS tunnel across the PSN. The payload carries the PW label plus tunnel label A on leaving PE1, and tunnel label B on arriving at PE2.)
NOTE
The PWs are invisible to the P device on a PSN; the P device provides transparent transport in tunnels.
Extracts the local service packets that are transmitted by CE1 from the AC.
2.
3.
4.
Encapsulates the data transmitted on a PW into PWE3 packets in standard format. The
process involves generation of the control word, and adding of the PW label and tunnel
label (tunnel label A) to the data.
5.
2.
Decapsulates the PW, and removes the tunnel label (tunnel label B), PW label, and control
word.
3.
4.
5.
Selects an AC by using the forwarder, and forwards the packets to CE2 at the remote end
over the AC.
(Figure: MS-PW packet forwarding. CE1 (NodeB) attaches to T-PE1 and CE2 (RNC) attaches to T-PE2 over ACs; PW1 in tunnel 1 and PW2 in tunnel 2 meet at S-PE1, where PW label A is swapped for PW label B and tunnel label A is swapped for tunnel label B.)
The T-PE in the MS-PW networking mode forwards packets in the same manner as PE in the
SS-PW networking mode. In the MS-PW networking mode, S-PE needs to swap the tunnel label
and PW label.
The S-PE device (S-PE1) forwards packets as follows:
When PWE3 packets transmitted from T-PE1 to T-PE2 traverse S-PE1, the tunnel label in the
packets is swapped; that is, tunnel label A is changed to tunnel label B. In addition, the PW
label in the packets is swapped; that is, PW label A is changed to PW label B.
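The double swap at the PW switching point can be sketched as follows; the label names and the switching table are illustrative:

```python
# Illustrative switching table at S-PE1: both labels are replaced together.
PW_SWAP = {("tunnel label A", "PW label A"): ("tunnel label B", "PW label B")}

def spe_switch(pkt):
    key = (pkt["tunnel_label"], pkt["pw_label"])
    pkt["tunnel_label"], pkt["pw_label"] = PW_SWAP[key]   # swap both labels
    return pkt

pkt = spe_switch({"tunnel_label": "tunnel label A", "pw_label": "PW label A"})
assert pkt == {"tunnel_label": "tunnel label B", "pw_label": "PW label B"}
```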
2.2.1 Introduction
This section provides the definition of ATM PWE3 and describes its purpose.
Definition
The ATM PWE3 technology emulates the basic behaviors and characteristics of ATM services
on a packet switched network (PSN) by using the PWE3 mechanism, so that the emulated ATM
services can be transmitted on a PSN.
Purpose
Aided by the ATM PWE3 technology, conventional ATM networks can be connected by a PSN.
Specifically, ATM PWE3 allows transmitting conventional ATM services over a PSN by
emulating the ATM services.
The networking type of ATM PWE3 can be one-to-one, N-to-one or ATM-TRANS depending
on the encapsulation type of ATM PWE3 packets. ATM PWE3 thus transmits
ATM services over the PSN without adding ATM equipment or changing the configuration of
the ATM CE equipment.
Figure 2-12 Typical application of ATM PWE3 (in the one-to-one encapsulation mode)
(Figure 2-12: a NodeB (CE1) attaches to PE1 and an RNC (CE2) attaches to PE2 over ACs; the 1-to-1 ATM PWE3 service is carried on a PW in an LSP across the PSN.)
Figure 2-13 Typical application of ATM PWE3 (in the N-to-one encapsulation mode)
(Figure 2-13: NodeBs CE1, CE2, and CE3 attach to PE1 over ACs; the N-to-1 ATM PWE3 service carries their traffic on a PW in an LSP across the PSN to PE2, which attaches to the RNC (CE4).)
NOTE
(Figure: format of an ATM PWE3 packet: the MPLS label (tunnel label and PW label), an optional control word carrying a sequence number, and the ATM service payload of one or more concatenated cells, each carrying VPI, VCI, and PTI fields.)
Control Word
The meanings of the fields in the control word are as follows:
l
Flags: This field has a length of 4 bits. The 4 bits are not used and are set to 0s.
Rsv: This field has a length of 2 bits. The 2 bits are reserved and are generally set to 0s.
Length: This field has a length of 6 bits. The 6 bits are not used and are set to 0s.
Sequence number: This field has a length of 16 bits. It is optional and used to guarantee
ordered packet delivery. If the 16 bits are set to 0s, the algorithm for checking sequence
numbers is not used.
VPI: The ingress PE copies the VPI field contained in the ATM service payload of the
incoming ATM cell into this field. When the equipment on an MPLS PSN network sets up
a VP, the VPI field contained in the ATM service payload of the incoming ATM cell is not
used.
VCI: The ingress PE copies the VCI field contained in the ATM service payload of the
incoming ATM cell into this field. When the equipment on an MPLS PSN network sets up
a VC, the VCI field contained in the ATM service payload of the incoming ATM cell is
not used.
PTI: This field indicates the payload type identifier and has a length of 3 bits. The ingress
PE copies the PTI field contained in the ATM service payload of the incoming ATM cell
into this field.
C: This field indicates the cell loss priority (CLP) and has a length of 1 bit. The C field is
used for congestion control. When the network becomes congested, cells with CLP = 1 are
discarded first. The ingress PE copies the CLP field contained in the ATM service payload
of the incoming ATM cell into this field.
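Assuming the field layout just described (4-bit flags, 2-bit Rsv, 6-bit length, and a 16-bit sequence number behind a leading all-zero nibble), the control word can be built like this:

```python
def atm_control_word(seq, flags=0, rsv=0, length=0):
    """Build the 32-bit control word: 0000 | flags(4) | rsv(2) | length(6) | seq(16)."""
    assert 0 <= seq < 1 << 16 and 0 <= flags < 16 and 0 <= rsv < 4 and 0 <= length < 64
    return (flags << 24) | (rsv << 22) | (length << 16) | seq

assert atm_control_word(0) == 0   # all-zero sequence number: checking disabled
assert atm_control_word(7).to_bytes(4, "big") == b"\x00\x00\x00\x07"
```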
NOTE
An ATM service payload has a length of 52 bytes (that is, a 4-byte ATM cell header and a 48-byte ATM cell
payload), whereas a general-purpose ATM cell has a length of 53 bytes. The 1-byte header error check (HEC)
field found in the ATM NNI cell is not present in the ATM service payload.
(Figure: format of an ATM PWE3 packet in one-to-one cell mode: the MPLS label (tunnel label and PW label), a control word (0000, Rsv, sequence number), and ATM specific fields (M, V, Rsv, PTI, C, and an optional VCI) followed by the 48-byte ATM cell payload, repeated for each concatenated cell.)
For traditional ATM VPC, the egress PE cannot change the VCI field. For ATM one-to-one cell
encapsulation, the VCI field of the egress PE can be set to a different value from that of the
ingress PE. The VCI field of the egress PE is set to a value that is determined by the PW label.
MPLS Label
MPLS labels include tunnel labels and PW labels, which are used to identify tunnels and PWs
respectively. The format of the tunnel label is the same as that of the PW label. For details, see
1.2.4 MPLS Label.
Control Word
The meanings of the fields in the control word are as follows:
l
0000: This field has a length of 4 bits, which must be set to 0.
Rsv: This field has a length of 4 bits. The 4 bits are reserved and are generally set to 0.
Sequence number: This field has a length of 16 bits. It is optional and used to guarantee
ordered packet delivery. If the 16 bits are set to 0, the algorithm for checking sequence
numbers is not used.
ATM Specific
The meanings of the fields in ATM specific are as follows:
l
M: This field has a length of 1 bit and indicates the transfer mode (that is, whether the
packet is an ATM cell or the payload of a frame). If M = 0, the packet is an ATM cell; if
M = 1, the packet is an AAL5 frame.
V: This field has a length of 1 bit and indicates whether the packet contains the VCI field.
If V = 0, the packet does not contain the VCI field; if V = 1, the packet contains the VCI
field.
NOTE
VCC is an ATM connection mode that is based on the VCI value of the ATM cell header. Because a PW
carries a single VCC, the cells do not need to carry the VCI field (V = 0).
VPC is an ATM connection mode that is based on the VPI value of the ATM cell header. Cells in a VPC
can have different VCI values, so each cell contains the VCI field (V = 1).
Rsv: This field has a length of 2 bits. The 2 bits are reserved and are generally set to 0.
PTI: This field indicates the payload type identifier and has a length of 3 bits. The ingress
PE copies the PTI field contained in the ATM service payload of the incoming ATM cell
into this field.
C: This field indicates the cell loss priority (CLP) and has a length of 1 bit. The C field is
used for congestion control. When the network becomes congested, cells with CLP = 1 are
discarded first. The ingress PE copies the CLP field contained in the ATM service payload
of the incoming ATM cell into this field.
VCI: The ingress PE copies the VCI field contained in the ATM service payload of the
incoming ATM cell into this field. When the equipment on an MPLS PSN network sets up
a VC, the VCI field contained in the ATM service payload of the incoming ATM cell is
not used.
NOTE
An ATM service payload has a length of 52 bytes (that is, a 4-byte ATM cell header and a 48-byte ATM cell
payload), whereas a general-purpose ATM cell has a length of 53 bytes. The 1-byte header error check (HEC)
field found in the ATM NNI cell is not present in the ATM service payload.
When Maximum Number of Concatenated Cells is set to 1, each PWE3 packet contains
only one ATM cell. Specifically, an ATM cell is directly encapsulated into a PWE3 packet
after the PE receives an ATM cell from the AC.
When Maximum Number of Concatenated Cells is set to a value greater than 1, the PE
uses the timer ATM Cell Concatenation Waiting Time. If the PE receives the maximum
number of ATM cells from the AC before the timer expires, the PE encapsulates all the
received ATM cells into a PWE3 packet and resets the timer. If the timer expires, the PE
encapsulates all the received ATM cells into a PWE3 packet and resets the timer, even if
the maximum number of ATM cells is not reached.
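The counter-and-timer behaviour above can be sketched as a generator. The timer handling is an assumption about how such a waiting time might be modelled in software, not the equipment's implementation:

```python
import time

def concatenate(cell_source, max_cells, wait_s):
    """Pack cells into one PWE3 packet when the count is reached or the timer expires."""
    packet, deadline = [], time.monotonic() + wait_s
    for cell in cell_source:
        packet.append(cell)
        if len(packet) >= max_cells or time.monotonic() >= deadline:
            yield packet                                   # emit one PWE3 packet
            packet, deadline = [], time.monotonic() + wait_s
    if packet:
        yield packet                                       # flush remaining cells

# Seven cells, at most three per packet, with a long waiting time.
packets = list(concatenate(iter(range(7)), max_cells=3, wait_s=60.0))
assert [len(p) for p in packets] == [3, 3, 1]
```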
The mapping between ATM service types and PHB service classes is as follows: CBR: EF; RT-VBR: AF3; NRT-VBR: AF2; UBR+: AF1; UBR: BE; PORT-TRANS: BE.
The OptiX OSN equipment performs QoS processing for ATM PWE3 packets as follows.
l
Ingress node
The PHB service class of an ATM PWE3 packet can be manually specified. When a packet
leaves an ingress node, the EXP value of the packet is determined according to the mapping
(between PHB service classes and EXP values) defined by the DiffServ domain of the
egress port.
Transit node
When a packet enters a transit node, the PHB service class of the packet is determined
according to the mapping (between EXP values and PHB service classes) defined by the
DiffServ domain of the ingress port. When a packet leaves a transit node, the EXP value
of the packet is determined according to the mapping (between PHB service classes and
EXP values) defined by the DiffServ domain of the egress port.
NOTE
When an MPLS tunnel uses a manually specified EXP value, the EXP value of ATM PWE3 packets is fixed,
not affected by a DiffServ domain.
2.2.3 Specifications
This section describes the capacity of the OptiX OSN equipment for supporting ATM PWE3.
Table 2-3 provides the specifications associated with ATM PWE3.
l Service type
l Connection type: PVP, PVC, Transparent
l Protection type: No Protection, PW APS, Slave Protection Pair
l TNN1AFO1: 2048; TNN1D75E and TNN1D12E: 64; OptiX OSN 3500/7500: 64
l Maximum number of concatenated ATM cells: 1 to 31 (10 by default); TNN1AFO1: 512
NOTE: When the number of ATM cells is 1, this parameter should be set to 0.
l Number of transparently transmitted ATM services supported by boards: 16, 120
2.2.5 Availability
This section describes the support required by the application of the ATM PWE3 feature.
Version Support
(Table: applicable equipment and versions, managed by the U2000 NMS.)
Hardware Support
Applicable boards: TNN1AFO1, TNN1D75E, TNN1D12E, N1MD12, N1MD75.
(Table: configuration objects and remarks: connection type of ATM services, ATM CoS mapping table, UNI port, PW carrying ATM services, and ATM service protection scheme.)
Maintenance Principles
None.
2.2.7 Principles
This section describes the principles of ATM PWE3.
NOTE
The ATM PWE3 in the encapsulation format of one-to-one VPC can be replaced by the ATM PWE3 in the
encapsulation format of N-to-one VPC; the ATM PWE3 in the encapsulation format of one-to-one VCC can be
replaced by the ATM PWE3 in the encapsulation format of N-to-one VCC. This section considers the ATM
PWE3 in the encapsulation format of N-to-one VCC as an example, which functions as a reference for the other
ATM types.
In the scenario shown in Figure 2-16, the PEs emulate ATM services in the encapsulation
format of N-to-one VCC.
(Figure 2-16: CE1 (VPI = 20, VCI = 20) and CE2 (VPI = 21, VCI = 21) attach to PE1 over ACs; a PW in an LSP carries their cells across the PSN to PE2, which attaches to CE3 (VPI = 11, VCI = 11).)
Extracts the ATM service packets that are transmitted by CE1 and CE2 from the ACs.
2.
Pre-processes the service payloads prior to PWE3, including set-up of the ATM connections
and QoS policies between CE1 and PE1, and between CE2 and PE1.
3.
4.
5.
2.
Decapsulates the PW, and extracts service payloads from the PW.
3.
Issue 03 (2012-11-30)
39
4.
2 PWE3
Selects the AC for forwarding packets, performs corresponding QoS processing, and
forwards the ATM service cells to CE3, based on the created ATM connections and QoS
policies between PE2 and CE3.
Related alarms: ATMPW_LOSPKT_EXC, ATMPW_MISORDERPKT_EXC, ATMPW_UNKNOWNCELL_EXC.
Related performance events: ATMPW_LOSPKTS, ATMPW_MISORDERPKTS, ATMPW_SNDCELLS, ATMPW_RCVCELLS, ATMPW_UNKNOWNCELLS.
l Service ID: 1 to 4294967295; default: none
l Service Name
l UNIs-NNI, UNI-UNI; default: UNIs-NNI
l Connection Type
l Protection Type: default: Unprotected
2.2.11.2 Connection
This section describes the parameters for creating a connection.
Table 2-7 lists the parameters for creating a connection.
l Connection Name: example: 28-N1D12E
l Source Board
l Source Port
l Source VPI: example: 35,36-39; default: none
l Source VCI: example: 35,36-39; 32 to 65535; default: none
l 1 to 4294967295; default: none
l Sink Board
l Sink Port: example: 1(Trunk-1)
l Sink VCI: example: 35,36-39; 32 to 65535; default: none
l Uplink Policy
l Downlink Policy
l Port: example: 36-N1AFO1-1 (PORT-1)
l Name: example: Port1
l Port Mode: Layer 2
l Encapsulation Type: ATM
l Channelize: No
l Max Data Packet Size (bytes)
l Laser Interface Status: On, Off; default: On
Table 2-9 Descriptions of the parameters for SDH interface Layer 2 Attributes
l Port: example: 36-N1AFO1-1 (PORT-1)
l Port Type: UNI, NNI; default: UNI
l ATM Cell Payload Scrambling: Enabled, Disabled; default: Enabled
l Min. VPI
l Max. VPI
l Min. VCI: 0-65535; default: 65535
l Max. VCI: 0-65535; default: 65535
l VCC-Supported VPI Count: 0-256, 65535; default: 65535
Table 2-10 Descriptions of the parameters for Advanced Attributes of the SDH interface
l Port: example: 36-N1AFO1-1 (PORT-1)
l Laser Transmission Distance (m)
l Scrambling Capability
l CRC Check Length
l Clock Mode
l Loopback Mode: Non-loopback, Inloop, Outloop; default: Non-loopback
2.3.1 Introduction
This section provides the definition of ETH PWE3 and describes its purpose.
Definition
The ETH PWE3 technology emulates the basic behaviors and characteristics of Ethernet services
on a packet switched network (PSN) by using the PWE3 mechanism, so that the Ethernet services
can be transmitted on a PSN.
Purpose
ETH PWE3 aims to transmit Ethernet services over a PSN. Figure 2-17 shows the typical
application of ETH PWE3.
Figure 2-17 Typical application of ETH PWE3
(Figure 2-17: a NodeB (CE1) attaches to PE1 and an RNC (CE2) attaches to PE2 over ACs carrying native Ethernet services; the Ethernet frames are carried on a PW in an LSP across the PSN.)
Packet Format
Figure 2-18 shows the format of an ETH PWE3 packet, consisting of the MPLS label, control
word, and payload.
Figure 2-18 Format of an ETH PWE3 packet
(Figure 2-18: the MPLS label (tunnel label and PW label, each carrying EXP, S, and TTL fields), an optional control word (0000, reserved bits, sequence number), and the payload (one Ethernet frame).)
MPLS Label
MPLS labels include tunnel labels and PW labels, which are used to identify tunnels and PWs
respectively. The format of the tunnel label is the same as that of the PW label. For details, see
1.2.4 MPLS Label.
Control Word
The 4-byte control word within an ETH PWE3 packet is optional and contains the following
fields:
l 0000: This field indicates the first 4 bits, which must be set to 0.
Sequence number: This field has a length of 16 bits and indicates the delivery sequence
number of an ETH PWE3 packet. Its initial value is random, and it increases by one with
each ETH PWE3 packet sent.
Payload
The payload refers to the Ethernet frame that is encapsulated into an ETH PWE3 packet. One
ETH PWE3 packet can encapsulate only one Ethernet frame. During the encapsulation, the preset
PW Encapsulation Mode is adopted.
PW Encapsulation Mode
The PW encapsulation mode is used to indicate whether a P-Tag is added when an Ethernet
frame is encapsulated into an ETH PWE3 packet. The PW encapsulation modes are classified
into two categories:
l Raw mode
In this mode:
– When the service-delimiting tag is User, in the direction that an Ethernet frame enters the PW, the PE directly encapsulates the Ethernet frame into a PWE3 packet after receiving it from the AC; in the direction that an Ethernet frame leaves the PW, the PE transmits the decapsulated Ethernet frame to the AC.
– When the service-delimiting tag is Service, in the direction that an Ethernet frame enters the PW, the PE strips the outer tag (P-Tag) of the Ethernet frame and encapsulates it into a PWE3 packet after receiving it from the AC; in the direction that an Ethernet frame leaves the PW, the PE adds a P-Tag to the decapsulated Ethernet frame before transmitting it to the AC.
l Tagged mode
In this mode:
– When the service-delimiting tag is User, in the direction that an Ethernet frame enters the PW, the PE adds a P-Tag and encapsulates the Ethernet frame into a PWE3 packet after receiving it from the AC (the added P-Tag is called the request VLAN); in the direction that an Ethernet frame leaves the PW, the PE strips the P-Tag off the decapsulated Ethernet frame before transmitting it to the AC.
– When the service-delimiting tag is Service, in the direction that an Ethernet frame enters the PW, the PE swaps the P-Tag for the U-Tag and encapsulates the Ethernet frame into a PWE3 packet after receiving it from the AC (the added P-Tag is called the request VLAN); in the direction that an Ethernet frame leaves the PW, the PE decapsulates the Ethernet frame and swaps the U-Tag for the P-Tag before transmitting it to the AC.
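A minimal sketch of the four tag-handling cases described above, assuming a frame is modeled as a list of VLAN tags (outermost first); the function name and the frame model are illustrative, not the equipment's implementation.

```python
def encapsulate(tags, mode, delimit, request_vlan=None):
    """Return the tag stack placed into the PWE3 packet on PW ingress.

    tags: outermost-first VLAN tag stack of the frame received from the AC.
    mode: "raw" or "tagged"; delimit: "user" or "service".
    """
    if mode == "raw":
        if delimit == "user":
            return list(tags)               # encapsulate as received
        return list(tags[1:])               # strip the outer P-Tag
    # tagged mode
    if delimit == "user":
        return [request_vlan] + list(tags)  # add a P-Tag (request VLAN)
    return [request_vlan] + list(tags[1:])  # swap the outer tag for the request VLAN
```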
NOTE
For the Hybrid MSTP equipment, you can set a request VLAN value for each PW whose encapsulation
mode is the Ethernet tagged mode, but the T-PID value in the request VLAN must be unique on the NE.
Typical Application
Figure 2-19 shows a NodeB backhaul network.
l The RNC can process S-VLAN tags. It allocates an S-VLAN ID to each NodeB to separate the services of one NodeB from those of another.
l The NodeB can process C-VLAN tags only. It allocates a C-VLAN ID to each type of service on a NodeB.
Therefore, the request VLAN function must be enabled to add S-VLAN IDs that isolate the services of different NodeBs.
Set the request VLAN values on NE1 and NE2 as follows:
l If the PW1 encapsulation mode of NE1 is the tagged mode, set the request VLAN to 100; if the PW2 encapsulation mode of NE1 is the tagged mode, set the request VLAN to 200.
l The PW1 and PW2 encapsulation modes of NE2 are the raw mode.
l In the service uplink direction, to transmit the service of NodeB 1 from NE1 to PW1, NE1 adds the request VLAN (S-VLAN 100) to the service because the PW encapsulation mode is the tagged mode; to transmit the service from NE2 to the RNC, NE2 decapsulates the service packet and transparently transmits the S-VLAN tag (100). Likewise, the service of NodeB 2 carries an S-VLAN tag (200) when transmitted from NE2 to the RNC. In this case, the services at the same port (PORT1) are isolated.
l In the service downlink direction, to transmit the service of the RNC from NE2 to PW1, NE2 directly encapsulates the service, which already carries the S-VLAN tag, because the PW encapsulation mode is the raw mode; to transmit the service from NE1 to NodeB 1, NE1 decapsulates the service packet and strips the S-VLAN tag. Likewise, the service of NodeB 2 does not carry an S-VLAN tag when transmitted from NE1 to NodeB 2.
(Figure 2-19: NodeB 1 and NodeB 2, each using C-VLANs 100-200, connect over ACs to NE1. PW1 and PW2 in one LSP across the PSN carry the services to NE2, which connects to the RNC over ACs. S-VLAN 100 identifies the NodeB 1 service and S-VLAN 200 identifies the NodeB 2 service.)
l Ingress node
The PHB service class of an ETH PWE3 packet can be manually specified. When a packet leaves a node, the EXP value of the packet is determined according to the mapping (between PHB service classes and EXP values) defined by the DiffServ domain of the egress port.
l Transit node
When a packet enters a transit node, the PHB service class of the packet is determined according to the mapping (between EXP values and PHB service classes) defined by the DiffServ domain of the ingress port. When a packet leaves a transit node, the EXP value of the packet is determined according to the mapping (between PHB service classes and EXP values) defined by the DiffServ domain of the egress port.
NOTE
When an MPLS tunnel uses a manually specified EXP value, the EXP value of ETH PWE3 packets is fixed,
not affected by a DiffServ domain.
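The per-node mapping described above can be sketched as follows; the mapping tables are illustrative defaults, not the equipment's actual DiffServ domain configuration.

```python
# Illustrative DiffServ-domain mappings between PHB service classes and EXP values.
PHB_TO_EXP = {"BE": 0, "AF1": 1, "AF2": 2, "AF3": 3, "AF4": 4, "EF": 5, "CS6": 6, "CS7": 7}
EXP_TO_PHB = {exp: phb for phb, exp in PHB_TO_EXP.items()}

def egress_exp(phb, fixed_exp=None):
    """EXP written when a packet leaves a node; a manually specified
    tunnel EXP overrides the DiffServ mapping."""
    return fixed_exp if fixed_exp is not None else PHB_TO_EXP[phb]

def ingress_phb(exp):
    """PHB service class assigned when a packet enters a transit node."""
    return EXP_TO_PHB[exp]
```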
Service Mode
Table 2-11 defines the model of the E-Line services carried on PWs.
Table 2-11 Model of the E-Line services carried on PWs

Model 1
Service flow: PORT+C-VLAN (source) -> PW (sink)
Service direction: UNI-NNI
Port mode: Layer 2 (source), Layer 3 (sink)
Encapsulation mode of port: IEEE 802.1Q (source), - (sink)
Service description: A UNI port processes the packets carrying a specific C-VLAN ID based on its tag attribute, and then sends the packets to the NNI side for transmission on PWs.

Model 2
Service flow: PORT+S-VLAN (source) -> PW (sink)
Service direction: UNI-NNI
Port mode: Layer 2 (source), Layer 3 (sink)
Encapsulation mode of port: QinQ (source), - (sink)
Service description: A UNI port processes the packets carrying a specific S-VLAN ID based on its QinQ attribute, and then sends the packets to the NNI side for transmission on PWs.

Model 3
Service flow: PORT (source) -> PW (sink)
Service direction: UNI-NNI
Port mode: Layer 2 (source), Layer 3 (sink)
Encapsulation mode of port: 802.1Q or QinQ (source), - (sink)
Service description: A UNI port processes the packets carrying a specific C-VLAN ID based on its tag attribute or QinQ type domain, and then sends the packets to the NNI side for transmission on PWs.
Typical Application
Figure 2-20 shows the typical application of service model 1. Service 1 runs between NodeB 1 and the RNC, and service 2 runs between NodeB 2 and the RNC. The two services have different VLAN IDs and need to be transmitted over the PSN.
On the UNI side of NE1, service 1 is transmitted to port 1 and service 2 is transmitted to port 2.
On the NNI side of NE1, service 1 and service 2 are transmitted on different PWs. In this manner,
the two services are separately transmitted.
NE2 processes the two services in the same manner as NE1.
Figure 2-20 Typical application of service model 1
(Service 1: port 1 (802.1Q), VLAN ID 100 at both NE1 and NE2; service 2: port 2 (802.1Q), VLAN ID 200. NodeB 1 and NodeB 2 connect over ACs to the UNI side of NE1; PW1 and PW2 in one LSP across the PSN carry the services to the NNI side of NE2, which connects to the RNC.)
Figure 2-21 shows the typical application of service model 2. Service 1 (QinQ service) runs between NodeB 1 and the RNC, and service 2 (QinQ service) runs between NodeB 2 and the RNC. The two services have different S-VLAN IDs and need to be transmitted over the PSN.
On the UNI side of NE1, service 1 is transmitted to port 1 and service 2 is transmitted to port 2.
On the NNI side of NE1, service 1 and service 2 are transmitted on different PWs. In this manner,
the two services are separately transmitted.
NE2 processes the two services in the same manner as NE1.
Figure 2-21 Typical application of service model 2
(Service 1: port 1 (QinQ), S-VLAN ID 100 at both NE1 and NE2; service 2: port 2 (QinQ), S-VLAN ID 200. NodeB 1 and NodeB 2 connect over ACs to the UNI side of NE1; PW1 and PW2 in one LSP across the PSN carry the services to the NNI side of NE2, which connects to the RNC.)
Figure 2-22 shows the typical application of service model 3. Service 1 (Ethernet service) runs between NodeB 1 and the RNC, and service 2 (Ethernet service) runs between NodeB 2 and the RNC. Service 1 carries various C-VLAN IDs, and service 2 carries various S-VLAN IDs. The two services need to be transmitted over the PSN.
On the UNI side of NE1, service 1 is transmitted to port 1 and service 2 is transmitted to port 2.
On the NNI side of NE1, service 1 and service 2 are transmitted on different PWs. In this manner,
the two services are separately transmitted.
NE2 processes the two services in the same manner as NE1.
Figure 2-22 Typical application of service model 3
(Service 1: port 1 (802.1Q) at both NE1 and NE2; service 2: port 2 (QinQ) at both NE1 and NE2. NodeB 1 and NodeB 2 connect over ACs to the UNI side of NE1; PW1 and PW2 in one LSP across the PSN carry the services to the NNI side of NE2, which connects to the RNC.)
2.3.4 Availability
The ETH PWE3 function requires the support of the applicable equipment, boards, and software.
Version Support

Applicable equipment: T2000, U2000
Hardware Support

Applicable boards: N1PEG16, N1PEX1, N1PETF8, R1PEFS8, Q1PEGS2, R1PEGS1, N1PEG8, N1PEX2, N2PEX1, N1PEFF8, R1PEF4F, TNN1EX2, TNN1EG8, TNN1ETMC, TNN1EFF8
Applicable objects: the entire feature, the ETH PWE3 service for the OptiX OSN 1500, the MAC address learning mode, the MAC address table capacity, and static/dynamic MAC addresses.
Maintenance Principles
None.
2.3.6 Principles
This section describes the principles of ETH PWE3.
In the scenario as shown in Figure 2-23, the PE devices emulate Ethernet services.
Figure 2-23 Principles of ETH PWE3
(CE1 (NodeB) and CE2 (RNC) connect over ACs to PE1 and PE2; a PW in an LSP across the PSN carries the Ethernet frames, with native Ethernet services on the AC sides and ETH PWE3 running between PE1 and PE2.)
PE1 processes the services as follows:
1. Extracts the Ethernet frames that are transmitted by CE1 from the AC.
2. Encapsulates the Ethernet frames into PWE3 packets.
3. Transmits the PWE3 packets to PE2 on the PW.
PE2 processes the services as follows:
1. Receives the PWE3 packets on the PW.
2. Extracts Ethernet frames from the PWE3 packets carried on the PW.
3. Transmits the Ethernet frames to CE2 through the AC.
Ethernet Layer 2 overhead length = Ethernet frame header length + FCS length
l An untagged Ethernet frame header is 14 bytes.
l A tagged Ethernet frame header is 18 bytes.
l An FCS is 4 bytes.
By default, an Ethernet packet carrying the MPLS packet is tagged. Therefore, the Ethernet Layer 2 overhead is 18 + 4 = 22 bytes.
NOTE
l The previous formula computes the payload transmission efficiency without considering the 20-byte interframe gap and preamble. These 20 bytes are omitted in ETH PWE3.
l When ETH PWE3 services are transmitted over radio links or Ethernet links, the ETH PWE3 service transmission efficiency pertains to the efficiency of the physical links transmitting Ethernet frames.
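The overhead arithmetic above can be checked with a short sketch; the payload size in the usage example is illustrative.

```python
def l2_overhead(tagged=True, fcs=4):
    """Ethernet Layer 2 overhead = frame header + FCS (no IFG/preamble)."""
    header = 18 if tagged else 14  # a tagged header carries a 4-byte VLAN tag
    return header + fcs

def payload_efficiency(payload_bytes, tagged=True):
    """Fraction of the Layer 2 frame occupied by payload."""
    return payload_bytes / (payload_bytes + l2_overhead(tagged))
```

For example, `payload_efficiency(1478)` with the default tagged overhead of 22 bytes gives 1478/1500.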
2.4.1 Introduction
This section provides the definition of TDM PWE3 and describes its purpose.
Definition
The TDM PWE3 technology emulates the basic behaviors and characteristics of TDM services
on a packet switched network (PSN) by using the PWE3 mechanism, so that the emulated TDM
services can be transmitted on a PSN.
TDM PWE3 services are also called CES services carried by PWE3. In this document, all the
CES services are carried by PWE3.
Purpose
TDM PWE3 aims to transmit TDM services over a PSN. In particular, the 2.4.2.3 CESoPSN mode can compress idle timeslots to reduce the required transmission bandwidth.
Figure 2-24 shows the typical application of TDM PWE3. The native TDM services between
the BTS and the BSC are transmitted over a PSN. PE1 emulates the native TDM services from
the BTS into CES services by using the CESoPSN technology. Then, the CES services are
transmitted to PE2 over the PSN. Finally, PE2 restores the CES services to the native TDM
services for transmission to the BSC.
Figure 2-24 Typical application of TDM PWE3 (CESoPSN mode)
(Framed E1 signals, containing service timeslots and idle timeslots, run as native TDM services over the ACs between CE1 (BTS) and PE1 and between PE2 and CE2 (BSC); a PW in an LSP across the PSN carries the TDM PWE3 service between PE1 and PE2.)
Aided by the TDM PWE3 technology, conventional TDM networks can be connected by a PSN. In this manner, PWE3 protects customer investment in TDM networks and supports the construction of an all-IP network architecture.
(Figure 2-25: An E1 frame consists of timeslots TS0 to TS31. In timeslot 0, an even-numbered frame carries the FAS with the bit pattern X 0 0 1 1 0 1 1, and an odd-numbered frame carries the NFAS with the bit pattern X 1 A Sa4-Sa8.)
As shown in Figure 2-25, the format of timeslot 0 in an odd-numbered frame is different from
that in an even-numbered frame. The signal contained in timeslot 0 of an even-numbered frame
is called frame alignment signal (FAS); the signal contained in timeslot 0 of an odd-numbered
frame is called not frame alignment signal (NFAS), which contains the A-bit indicating remote
alarms and spare bits Sa4 to Sa8. The FAS and NFAS each contain an X-bit. Based on the
function of the X-bit, E1 frames are classified into generic dual-frames and cyclic redundancy
check 4 (CRC-4) multiframes.
l When the E1 frame is a generic dual-frame, the X-bit functions as the Si-bit.
l When the E1 frame is a CRC-4 multiframe, the X-bit is used to transmit the CRC-4 multiframe check signal, CRC-4 check error bits, and the multiframe alignment signal (MFAS).
l A PCM30 frame uses channel associated signaling (CAS). In this mode, timeslot 0, as a synchronous timeslot, cannot carry voice; timeslot 16, which carries CAS, cannot carry voice either. As a result, one E1 frame can carry only 30 voice signals, and it is therefore called a PCM30 frame.
l A PCM31 frame uses common channel signaling (CCS). A multiframe in CCS mode does not need to transmit CAS. In this mode, except for timeslot 0, which carries synchronization signals, one E1 frame can carry 31 voice signals, and it is therefore called a PCM31 frame.
The two classification methods focus on two attributes of E1 frames, and they can be combined.
Specifically, there are four E1 frame formats in actual application:
l PCM30, generic dual-frame
l PCM30, CRC-4 multiframe
l PCM31, generic dual-frame
l PCM31, CRC-4 multiframe
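The combination of the two classification attributes can be enumerated programmatically; the labels below are illustrative shorthand, not the product's configuration names.

```python
from itertools import product

# Signaling type (PCM30/PCM31) x framing type -> the four E1 frame formats.
SIGNALING = ("PCM30 (CAS)", "PCM31 (CCS)")
FRAMING = ("generic dual-frame", "CRC-4 multiframe")
E1_FORMATS = [f"{s}, {f}" for s, f in product(SIGNALING, FRAMING)]
```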
2.4.2.2 SAToP
Structure Agnostic TDM over Packet Switched Network (SAToP) is a method for encapsulating
TDM serial bit streams as pseudo wires.
SAToP provides the emulation and transport functions for unchannelized TDM services. That
is, it addresses only structure-agnostic transport. Therefore, SAToP can meet the transport needs
when a user needs services based on E1s.
SAToP segments and encapsulates TDM services as serial bit streams, and then transmits the
bit streams in PW tunnels. Although it disregards the TDM frame structure, it supports
transmission of synchronous information.
Figure 2-26 shows the encapsulation format of a SAToP packet.
Figure 2-26 Encapsulation format of a SAToP packet
(Each 32-bit word: bits 0-19 label, bits 20-22 EXP, bit 23 S, bits 24-31 TTL.)
MPLS label:
Tunnel label (20 bits) | EXP | S | TTL
PW label (20 bits) | EXP | S | TTL
Control word:
0000 | L | R | ... | LEN (6 bits) | Sequence number (16 bits)
RTP header
TDM data
A SAToP packet contains the MPLS label, control word, RTP header, and TDM data.
MPLS Label
MPLS labels include tunnel labels and PW labels, which are used to identify tunnels and PWs
respectively. The format of the tunnel label is the same as that of the PW label. For details, see
1.2.4 MPLS Label.
Control Word
The control word of a SAToP packet is 4-byte long and contains the following fields:
l 0000: The 4 bits are generally set to all 0s. They are used to indicate the start of an Associated Channel Header (ACH). The ACH is needed if the state of the SAToP PW is monitored using virtual circuit connectivity verification (VCCV).
l L: This bit indicates whether the TDM data carried in a packet is valid. If set to 1, it indicates that the TDM data is omitted in order to conserve bandwidth.
l R: This bit indicates whether the local CE-side interworking function (IWF) is in the packet loss state. If set to 0, it indicates that a preconfigured number of consecutive packets are received.
l LEN: The 6 bits indicate the length of the SAToP packet (including the SAToP header and TDM data). The minimum length of a transport unit on a PSN is 64 bytes. When a packet is shorter than 64 bytes, LEN indicates the actual length of the packet so that the padding bits can be identified. If a packet is longer than 64 bytes, LEN is set to all 0s.
NOTE
If LEN is 0, the packet length is supposed to be equal to the default value. If the actual packet length is different from the default value, the packet is considered as a malformed packet.
l Sequence number: The 16 bits indicate the transmission sequence number of a SAToP packet. Its initial value is random and is incremented by one with each SAToP data packet sent. If the sequence number of a packet reaches the maximum (65535), the sequence number of the next packet starts with the minimum. The sequence number can be in two modes:
– Huawei mode: applies to the scenario where only Huawei equipment composes a network. In this mode, the minimum sequence number is 0.
– Standard mode: applies to the scenario where Huawei equipment networks with third-party equipment. In this mode, the minimum sequence number is 1.
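The control word layout described above can be sketched at the bit level. The placement of the reserved and fragmentation bits between R and LEN is an assumption based on RFC 4553 (4 + 1 + 1 + 2 + 2 + 6 + 16 = 32 bits); the function names are illustrative.

```python
def pack_control_word(l_bit, r_bit, length, seq):
    """Pack a 32-bit SAToP-style control word:
    0000 | L | R | 2 reserved | 2 fragmentation | LEN (6) | sequence (16).
    LEN is 0 for packets longer than 64 bytes."""
    assert 0 <= length < 64 and 0 <= seq <= 0xFFFF
    return (l_bit << 27) | (r_bit << 26) | (length << 16) | seq

def unpack_seq(word):
    """Recover the 16-bit sequence number from a control word."""
    return word & 0xFFFF
```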
RTP Header
The Real-time Transport Protocol (RTP) header is used to carry timestamp information to the remote end so that the packet clock can be restored. The RTP header is 12 bytes long. The 32-bit timestamp field in the RTP header carries the timestamp information.
Figure 2-27 shows the RTP header format.
Set Version (V) to 2. Set Padding (P), Header Extension (X), CSRC Count (CC), and Marker (M) to 0.
Payload type (PT):
l PT values are allocated to each direction of a PW, and the PT values are in a dynamic value range. The receive and transmit directions of a PW can share a PT value. Different PWs can share a PT value.
l The PE on the upstream PW puts the allocated PT value in the PT field of the RTP header. The PE at the other end of the PW can detect exceptional packets according to the received PT value.
The sequence number must be the same as the sequence number in the SAToP control word.
The timestamp is used for carrying the time information in the network. Two timestamp generation modes are as follows:
l Absolute mode: The TDM circuit can restore clock information, and the upstream PE sets the timestamp according to the clock. The timestamp is closely related to the sequence number. All equipment supporting CESoPSN must support the absolute mode.
The synchronization source identifiers are used for detecting errored connections.
NOTE
On the Hybrid MSTP equipment, you can set whether the RTP header is encapsulated into the SAToP
packet.
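The field settings above can be sketched as a 12-byte header packer; the PT, sequence number, timestamp, and SSRC values passed in are illustrative.

```python
import struct

def rtp_header(pt, seq, timestamp, ssrc):
    """12-byte RTP header: V=2, with P, X, CC, and M set to 0, followed by
    PT, the 16-bit sequence number, the 32-bit timestamp, and the SSRC."""
    byte0 = 2 << 6            # version 2 in the top two bits; P, X, CC zero
    byte1 = pt & 0x7F         # M bit zero, 7-bit payload type
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)
```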
TDM Data
"TDM data" indicates the TDM data payload in the form of serial bit stream. When a PW packet
is shorter than 64 bytes, fixed bits are padded to meet Ethernet transmission requirements.
The amount of E1 bit stream that is encapsulated in a PW packet is determined by Packet Loading Time, which indicates the duration for which a PW packet loads TDM bit streams. Because the number of loaded TDM bits equals Packet Loading Time multiplied by the E1 rate, Packet Loading Time limits the number of loaded TDM bits. For instance, when the packet loading time is 1 ms, each PW packet loads a 2048-bit E1 payload.
2.4.2.3 CESoPSN
Circuit Emulation Service over Packet Switched Network (CESoPSN) is a method for
encapsulating TDM frames as pseudo wires.
CESoPSN provides the emulation and transport functions for channelized TDM services. That
is, it identifies the TDM frame format and signaling in the frame. Therefore, CESoPSN can meet
the transport needs when a user needs services based on timeslots.
With the frame format of the TDM service identified, CESoPSN does not transmit idle timeslot
channels; instead, CESoPSN extracts only the usable timeslots from the service flow and then
encapsulates these timeslots as PW packets for transmission.
NOTE
CESoPSN does not support the automatic detection of idle timeslots. Therefore, idle timeslots must be
manually specified.
(Encapsulation format of a CESoPSN packet; each 32-bit word: bits 0-19 label, bits 20-22 EXP, bit 23 S, bits 24-31 TTL.)
MPLS label:
Tunnel label (20 bits) | EXP | S | TTL
PW label (20 bits) | EXP | S | TTL
Control word:
0000 | L | R | M | FRG | LEN (6 bits) | Sequence number (16 bits)
RTP header
TDM data:
Timeslots extracted from consecutive E1 frames (for example, timeslots 2, 3, and 4 of each frame)
A CESoPSN packet contains the MPLS label, control word, RTP header, and TDM data.
MPLS Label
MPLS labels include tunnel labels and PW labels, which are used to identify tunnels and PWs
respectively. The format of the tunnel label is the same as that of the PW label. For details, see
1.2.4 MPLS Label.
Control Word
The control word of a CESoPSN packet is 4-byte long and contains the following fields:
l 0000: The 4 bits are generally set to all 0s. They are used to indicate the start of an Associated Channel Header (ACH). The ACH is needed if the state of the CESoPSN PW is monitored using virtual circuit connectivity verification (VCCV).
l L: This bit indicates whether the TDM data carried in a packet is valid. If set to 1, it indicates that the TDM data is omitted in order to conserve bandwidth.
l R: This bit indicates whether the local CE-side interworking function (IWF) is in the packet loss state. If set to 0, it indicates that a preconfigured number of consecutive packets are received.
l M: The 2 bits are used for alarm transparent transmission, indicating that the CE end or AC side of the uplink PE detects a critical alarm.
l LEN: The 6 bits indicate the length of the CESoPSN packet (including the CESoPSN header and TDM data). The minimum length of a transport unit on a PSN is 64 bytes. When a packet is shorter than 64 bytes, LEN indicates the actual length of the packet so that the padding bits can be identified. If a packet is longer than 64 bytes, LEN is set to all 0s.
NOTE
If LEN is 0, the packet length is supposed to be equal to the default value. If the actual packet length is different from the default value, the packet is considered as a malformed packet.
l Sequence number: The 16 bits indicate the transmission sequence number of a CESoPSN packet. Its initial value is random and is incremented by one with each CESoPSN data packet sent. If the sequence number of a packet reaches the maximum (65535), the sequence number of the next packet starts with the minimum. The sequence number can be in two modes:
– Huawei mode: applies to the scenario where only Huawei equipment composes a network. In this mode, the minimum sequence number is 0.
– Standard mode: applies to the scenario where Huawei equipment networks with third-party equipment. In this mode, the minimum sequence number is 1.
RTP Header
The Real-Time Transport Protocol (RTP) header is used to carry timestamp information to the remote end so that the packet clock can be restored. The RTP header is 12 bytes long. The 32-bit timestamp field in the RTP header carries the timestamp information.
Set Version (V) to 2. Set Padding (P), Header Extension (X), CSRC Count (CC), and Marker (M) to 0.
Payload type (PT):
l PT values are allocated to each direction of a PW, and the PT values are in a dynamic value range. The receive and transmit directions of a PW can share a PT value. Different PWs can share a PT value.
l The PE on the upstream PW puts the allocated PT value in the PT field of the RTP header. The PE at the other end of the PW can detect abnormal packets according to the received PT value.
The sequence number must be the same as the sequence number in the CESoPSN control word.
The timestamp is used for carrying the time information in the network. Two timestamp generation modes are as follows:
l Absolute mode: The TDM circuit can restore clock information, and the upstream PE sets the timestamp according to the clock. The timestamp is closely related to the sequence number. All equipment supporting CESoPSN must support the absolute mode.
The synchronization source identifiers are used for detecting errored connections.
NOTE
On the Hybrid MSTP equipment, you can set whether the RTP header is encapsulated into the CESoPSN
packet.
TDM Data
"TDM data" indicates TDM data payloads. When a PW packet is shorter than 64 bytes, fixed
bits are padded to meet Ethernet transmission requirements.
"Timeslot" indicates the timeslot in TDM frames. Each timeslot uses 8 bits. All the timeslots
are encapsulated as TDM data payloads (excluding the CRC bit). The number of encapsulated
frames and the number of timeslots in each frame can be set as required.
NOTE
On the Hybrid MSTP equipment, CESoPSN does not encapsulate timeslot 0 of an E1 signal into the payload; the remote PE restructures the timeslots.
The number of E1 frames that are encapsulated in a PW packet is determined by Packet Loading Time, which indicates the duration for which a PW packet loads TDM frames and therefore limits the number of loaded TDM frames. The period of a TDM frame is 125 µs. As a result, if the packet loading time is 1 ms, each PW packet loads eight TDM frames.
1. Upon receipt of a CES packet, the PE computes the offset address of the packet based on the packet sequence number. The offset address equals the remainder after the sequence number is divided by the buffer size. For example, when the jitter buffer time is 8 ms and the packet loading time is 1 ms, the buffer size is 8 (= 8 ms/1 ms). The offset address of a CES packet then equals the remainder after the sequence number is divided by 8.
2. The CES packet is saved at the position corresponding to the offset address in the buffer.
3. After the jitter buffer time, the PE sends the packets in the buffer in sequence number order. If a packet with a certain sequence number is missing, an idle code is inserted.
NOTE
The size of the data jitter buffer can be set as required. A low-capacity jitter buffer easily overflows, and as a result data may be lost to different degrees; a high-capacity jitter buffer can absorb jitter resulting from larger packet transmission intervals on the network, but a large delay may be generated when the TDM bit streams are reconstructed. Therefore, during service deployment, you need to properly configure the data jitter buffer based on the actual network delay and jitter conditions.
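The three steps above can be sketched as follows; the buffer model, payload values, and idle code are illustrative assumptions, not the equipment's implementation.

```python
IDLE_CODE = 0xFF  # illustrative filler inserted for a missing packet

def play_out(packets, jitter_buffer_ms=8, loading_time_ms=1, start_seq=0):
    """Reorder received CES packets through a modulo-indexed jitter buffer.

    packets: dict mapping sequence number -> payload (arrival order irrelevant).
    Returns the payloads for start_seq .. start_seq + buffer_size - 1 in
    sequence order, inserting IDLE_CODE where a sequence number is missing.
    """
    size = jitter_buffer_ms // loading_time_ms       # e.g. 8 ms / 1 ms = 8 slots
    buffer = [IDLE_CODE] * size
    for seq, payload in packets.items():
        buffer[seq % size] = payload                 # offset = seq mod buffer size
    return [buffer[(start_seq + i) % size] for i in range(size)]
```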
The fault information on the AC link or port is transmitted through the PSN as follows:
(The NodeB connects over AC1 to PE1, and the RNC connects over AC2 to PE2; a PW in an LSP crosses the PSN between PE1 and PE2.)
If PE1 detects a link fault or an E1 port fault on the AC1 side, PE1 returns an RDI upstream and informs PE2 of the fault through the L field of the control word. Upon receiving the control word, PE2 reports the CESPW_OPPOSITE_ACFAULT alarm and inserts AIS into the AC2 side.
l
The service alarms on the AC side are transparently transmitted through the PSN as follows:
(Same topology: NodeB - AC1 - PE1 - PW/LSP across the PSN - PE2 - AC2 - RNC.)
If the RNC detects a link fault or an E1 port fault on the AC2 side, the RNC returns an RDI to PE2; PE2 reports the RAI alarm and informs PE1 of the fault through the L/M field of the control word. Upon receiving the control word, PE1 reports the RAI alarm and returns an RDI to the AC1 side.
NOTE
The SAToP encapsulation mode does not support the M field, and therefore cannot transparently
transmit the RAI alarm.
CES Alarm Transparent Transmission from the NNI Side to the AC Side
Figure 2-33 shows the CES alarm transparent transmission from the NNI side to the AC side.
Figure 2-33 CES alarm transparent transmission from the NNI side to the AC side
(NodeB - AC1 - PE1 - PW/LSP across the PSN - PE2 - AC2 - RNC.)
When detecting that the packet loss ratio continuously exceeds the preset threshold, PE2 inserts the AIS alarm into AC2 and uses the R field in the control word to transmit the information to PE1. PE1 then reports the RAI alarm based on the R field.
CES Retiming
CES retiming is an approach to reduce signal jitter after CES services traverse a transmission
network. It combines the timing reference signal and CES service signal for transmission.
Therefore, the transmitted CES service signal carries the timing information that is synchronized
with the timing reference signal. CES retiming is applicable when the following conditions are
met:
l All the clocks on the PSN are synchronized with the clock of the incoming service.
Figure 2-34 shows a clock solution wherein the BSC transmits synchronization information to
the BTS over the PSN with CES retiming enabled. In this solution:
l PE1 receives an E1 service from the BSC and extracts the clock from the E1 service (the E1 service is emulated into the CES service after entering the PSN). The extracted clock functions as the primary reference clock and is traced by the other NEs on the PSN. In this manner, all the clocks on the PSN are synchronized with the clock of the BSC.
l CES retiming is enabled on PE2 so that the system clock of PE2 can be carried in the E1 service sent to the BTS (the CES service is restored to the E1 service after leaving the PSN). The clock of PE2 is synchronized with that of the BSC, so the BTS can extract the clock of the BSC from the received E1 service.
(Figure 2-34: CE1 (BSC) sends the E1 service to PE1; the service crosses the PSN in an LSP to PE2, where a FIFO retimes the restored E1 service before it is sent to CE2 (BTS).)
CES retiming is implemented as follows: The E1 bit streams restored from the CES service are
written into a First In, First Out (FIFO) queue, and then are read out from the FIFO queue by
using the retiming clock. The output signal contains the retiming clock; therefore, it is
synchronized with the primary reference clock, with the jitter and wander in the original E1
service absorbed by the data jitter buffer.
CES ACR
CES ACR is a technology wherein the CES service is used to restore the clock of the source end
in an adaptive manner. The sink end recovers the clock based on the packet received on its NNI
side.
l All the clocks on the PSN are synchronous, but the clocks on the PSN are not synchronized with the clock of the incoming service.
For the principle and implementation process of CES ACR, see 20.4 CES ACR.
l Ingress node
The PHB service class of a TDM PWE3 packet can be manually specified (the PHB service class is set to EF by default). When a packet leaves an ingress node, the EXP value of the packet is determined according to the mapping (between PHB service classes and EXP values) defined by the DiffServ domain of the egress port.
l Transit node
When a packet enters a transit node, the PHB service class of the packet is determined according to the mapping (between EXP values and PHB service classes) defined by the DiffServ domain of the ingress port. When a packet leaves a transit node, the EXP value of the packet is determined according to the mapping (between PHB service classes and EXP values) defined by the DiffServ domain of the egress port.
NOTE
When an MPLS tunnel uses a manually specified EXP value, the EXP value of TDM PWE3 packets is fixed,
not affected by a DiffServ domain.
In addition, the Hybrid MSTP equipment supports the CES CAC function. If bandwidth
resources are insufficient when CES services are created, the services cannot be created and the
system will display a prompt message.
NOTE
l To enable the CES CAC function, set the bandwidth of the tunnel carrying CES services and the PW bandwidth of the other PWE3 services carried on the tunnel.
2.4.3 Specifications
This section describes the capacity of the OptiX OSN equipment for supporting TDM PWE3.
Table 2-12 provides the specifications associated with TDM PWE3.
Table 2-12 Specifications associated with TDM PWE3
Item: Emulation mode
Specifications:
l SAToP
l CESoPSN
Item: Clock mode
Specifications:
l CES ACR
l CES retiming
31
l IETF RFC 4197: Requirements for Edge-to-Edge Emulation of Time Division Multiplexed (TDM) Circuits over Packet Switching Networks
l IETF RFC 4553: Structure-Agnostic Time Division Multiplexing (TDM) over Packet (SAToP)
l IETF RFC 5086: Structure-Aware Time Division Multiplexed (TDM) Circuit Emulation Service over Packet Switched Network (CESoPSN)
l ITU-T G.704: Synchronous frame structures used at 1544, 6312, 2048, 8448 and 44 736 kbit/s hierarchical levels
2.4.5 Availability
The TDM PWE3 function requires the support of the applicable equipment, boards, and
software.
Version Support

Applicable equipment: U2000
Hardware Support
Applicable Board    Applicable Version    Applicable Equipment
N1CQ1
N1MD12
N1MD75
R1ML1
TNN1CO1
TNN1D75E
TNN1D12E
Applicable Object    Remarks
CES service
Emulation mode
CES service protection
Clock (ACR clock)
l The TNN1D75E/TNN1D12E/N1MD75/N1MD12 board supports 32xE1 ports.
l The TNN1D75E/TNN1D12E/N1MD75/N1MD12 board supports four ACR clocks.
Service bandwidth
VCCV
Transparent transmission of CES alarms
Jitter buffer time, packet loading time
Maintenance Principles
None.
2.4.7 Principles
This section describes the principles of TDM PWE3.
As shown in Figure 2-35, the PE device uses CESoPSN (see 2.4.2.3) to emulate native TDM services.
SAToP (see 2.4.2.2) uses a similar encapsulation process, but does not identify the E1 frame format
or process the timeslots of the E1 frame.
Figure 2-35 Principles of TDM PWE3 services (CESoPSN mode)
[Figure: CE1 (BTS) and CE2 (BSC) exchange native TDM services over framed E1 ACs, which carry service timeslots and idle timeslots; PE1 and PE2 carry the TDM PWE3 service over a PW inside an LSP across the PSN.]
PE1 performs the following encapsulation operations:
1. Extracts the E1 bit streams that are transmitted by CE1 from the AC.
2. Segments the E1 bit streams, with a specified number of E1 frames contained in each
segment.
3. Extracts valid payloads from the specified timeslots in each segment, and encapsulates the
valid payloads into a PWE3 packet in standard format.
PE2 performs the following decapsulation operations:
1. Extracts the valid payloads from the PWE3 packets carried on the PW.
2. Restores E1 frames based on the valid payloads (the unused timeslots are padded based on
the preconfigured 8-bit pattern), and reconstructs the E1 bit streams.
NOTE
Timeslot 0 is not padded based on the preconfigured 8-bit pattern, but is regenerated based on the
preconfigured PCM format.
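The timeslot extraction and restoration steps can be sketched as follows. This is an assumed simplification: the frame layout, padding value, and function names are illustrative, and timeslot 0 is simply padded here, whereas the real equipment regenerates it from the PCM format.

```python
# Rough sketch of CESoPSN payload handling: segment framed E1 into a fixed
# number of frames, keep only the configured service timeslots, and rebuild
# frames at the far end, padding idle timeslots with a preconfigured 8-bit
# pattern. All names and layouts here are illustrative assumptions.
FRAME_SIZE = 32  # one E1 frame carries 32 timeslots (one byte each)

def encapsulate(frames, service_timeslots):
    """PE1 side: extract the payload bytes of the service timeslots."""
    payload = []
    for frame in frames:
        payload.extend(frame[ts] for ts in service_timeslots)
    return payload

def restore(payload, service_timeslots, frames_per_packet, pad=0xFF):
    """PE2 side: rebuild E1 frames, padding unused timeslots with `pad`.
    (Simplification: the real equipment regenerates timeslot 0 from the
    preconfigured PCM format instead of padding it.)"""
    per_frame = len(service_timeslots)
    frames = []
    for i in range(frames_per_packet):
        frame = [pad] * FRAME_SIZE
        for j, ts in enumerate(service_timeslots):
            frame[ts] = payload[i * per_frame + j]
        frames.append(frame)
    return frames
```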
3 ATM OAM
3.1 Introduction
This section provides the definition of ATM OAM and describes its purpose.
Definition
ATM OAM is used for detecting and locating ATM faults, and monitoring ATM performance.
In this document, ATM OAM refers to OAM only at the ATM layer and implements various
OAM functions by means of specific ATM OAM cells.
Purpose
ATM OAM provides segment-based ATM OAM between the CE and the PE, and end-to-end ATM OAM between CEs.
As shown in Figure 3-1, ATM OAM cells are transmitted and detected between the CE and the
PE, or between the CEs to monitor the ATM link.
Figure 3-1 Typical application of ATM OAM
[Figure: CE1 (NodeB) connects through PE1 and PE2 to CE2 (RNC); the segment check runs between a CE and a PE, and the end-to-end check runs between CE1 and CE2.]
Level    Description
F1       Regenerator section level
F2       Digital section level
F3       Transmission path level
F4       Virtual path level
F5       Virtual channel level
F1, F2, and F3 are OAM flows at the physical layer. The generation of OAM flows at the
physical layer and the implementation of the OAM functions depend on the transport
mechanism of the transmission system. Three types of transmission can be provided on ATM
networks:
l SDH-based transmission system
l Cell-based transmission system
l PDH-based transmission system
l End point: refers to the end point of an ATM network connection or, generally, to the edge
of an ATM network.
l Segment point: refers to the end point of a segment. One ATM link consists of multiple
segments.
l Intermediate point: refers to the OAM node between segment points or end points.
Intermediate points can therefore be further classified into intermediate points between
segment points and intermediate points between end points.
The segment and end attributes are set to intercept ATM OAM cells in the expected direction
and location.
Figure 3-2 shows the segment and end attributes of CPs.
Figure 3-2 Segment and end attributes of CPs
[Figure: end points, segment-end points, segment points, and intermediate points along an ATM connection.]
Directions of CPs
The directions of CPs are classified into forward and backward directions.
Figure 3-3 shows the directions of CPs.
Figure 3-3 Directions of CPs
[Figure: on the packet transmission equipment, the forward direction of a CP follows the ATM connection from source to sink, and the backward direction runs from sink to source.]
AIS/RDI
AIS cells are used for reporting defect indications in the forward direction. Table 3-2 lists the
types and functions of AIS cells.
Table 3-2 AIS/RDI functions
Cell Generation and
Transmission Mechanism
CC
CC is used for continuously monitoring link continuity. With the CC function, unexpected
interruption of a link (the link is intermittently disconnected) during a continuous period can be
detected. Table 3-3 lists the types and functions of CC cells.
Description of CC:
l seg_CC is responsible for transmitting or receiving segment CC cells.
l ete_CC is responsible for transmitting or receiving end-to-end CC cells.
LB
LB is used for detecting link continuity and locating faults. LB cells can be inserted at one
location on which the segment and end attribute is set along a virtual connection and looped
back at different downstream locations on which the segment and end attributes are set,
eliminating the need to interrupt services.
Table 3-4 LB functions
Loopback Location Identifier (LLID)
NOTE
Actually, the LLID field can be set to any value. The LLID field is designed according to the second
coding mode (0x01) specified in ITU-T I.610. For this reason, you can enter 15 bytes, which consist of
a 2-byte country code (the default value is 0000 in hexadecimal), a 2-byte network code (the default
value is 0000 in the BCD code pattern), and an 11-byte NE code (by default, the first four bytes use
the NE ID and the last seven bytes are 0s). In addition, once the LLID field is set, the LLID field does
not vary with the NE ID even if the NE ID changes. The last byte of the LLID field is unused.
Loopback Indication (LI)
LI is used to identify the loopback status of LB cells.
l LI = 1: indicates that LB cells are not looped back.
l LI = 0: indicates that LB cells are looped back.
Segment and End Attribute of LB
Test Result
Success/Failure
NOTE
If the test fails, the error code and failure cause are displayed; if the test is successful, a message
indicating a successful test is displayed. The LB test result can be found in the network management
events.
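The 15-byte LLID layout described in the note above can be sketched directly. The function name and byte-order choice are assumptions for illustration.

```python
# Sketch of the 15-byte LLID layout: 2-byte country code, 2-byte network
# code, and an 11-byte NE code whose first four bytes default to the NE ID
# and whose remaining bytes are zeros. Purely illustrative.
def build_llid(ne_id, country=b"\x00\x00", network=b"\x00\x00"):
    ne_code = ne_id.to_bytes(4, "big") + b"\x00" * 7
    llid = country + network + ne_code
    assert len(llid) == 15  # 2 + 2 + 11 bytes
    return llid
```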
In F4 flows, the VCI value of a segment OAM cell is 3 (VCI = 3) and the VCI value of an end-to-end OAM cell is 4 (VCI = 4).
In F5 flows, the payload type identifier (PTI) value of a segment OAM cell is 4 (PTI = 4), and
the PTI value of an end-to-end OAM cell is 5 (PTI = 5).
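The two conventions above can be captured in a small classifier. The cell model (a dict with only the fields relevant here) is an assumed simplification.

```python
# Minimal classifier following the VCI/PTI conventions stated above for
# identifying OAM cells in F4 and F5 flows. The cell representation is an
# illustrative assumption.
def oam_scope(flow, cell):
    """Return 'segment', 'end-to-end', or None for a cell in an F4/F5 flow."""
    if flow == "F4":  # VP level: OAM cells are identified by VCI
        return {3: "segment", 4: "end-to-end"}.get(cell["vci"])
    if flow == "F5":  # VC level: OAM cells are identified by PTI
        return {4: "segment", 5: "end-to-end"}.get(cell["pti"])
    return None
```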
3.4 Availability
This section describes the support required by the application of the ATM OAM feature.
Version Support
Applicable Equipment    Applicable Version
U2000
Hardware Support
Applicable Board    Applicable Version    Applicable Equipment
TNN1AFO1
TNN1D75E
TNN1D12E
N1MD12
N1MD75
Applicable Object    Remarks
ATM OAM
Segment and end attributes
Loopback (LB) test
NE code
Remote LB test
l Before a segment-to-segment LB test, set a segment point or non-segment-end point at one
segment of the test domain. After the test, remove the segment point or non-segment-end
point.
Maintenance Principles
None.
3.6 Principles
The functions of ATM OAM are implemented by means of specific ATM OAM cells.
3.6.1 AIS/RDI
This section uses an example in which an intermediate point detects a VPC/VCC fault to
describe how the AIS/RDI functions are implemented.
NOTE
VPC fault detection and VCC fault detection have the same principles. The following part uses a VPC fault
detection process as an example to describe the principles of AIS/RDI functions.
As shown in Figure 3-4, on the ATM network, the end points are set up in the backward direction
of NE1 and in the forward direction of NE5, the segment points are set up in the backward
direction of NE2 and in the forward direction of NE4, and NE3 is the intermediate point. The
VPC from NE2 to NE3 breaks due to a fault. In this case, the AIS/RDI function is implemented
as follows:
l
NE3 immediately reports the LCD alarm. Since NE3 is an intermediate point, it
immediately inserts seg_VP-AIS and e-t-e_VP-AIS cells in the forward direction. Then, it
continues to periodically transmit these cells.
After the forward segment point of NE4 captures seg_VP-AIS cells, NE4 immediately
reports the VP-AIS status. Meanwhile, the forward segment point of NE4 immediately
transmits seg_VP-RDI cells to the upstream.
NOTE
After the forward end point of NE5 captures e-t-e_VP-AIS cells, NE5 immediately
reports the VP-AIS status. Meanwhile, the forward end point of NE5 immediately
transmits e-t-e_VP-RDI cells to the upstream.
Since NE3 is an intermediate point, it does not capture any RDI cell.
After the backward segment point of NE2 captures seg_VP-RDI cells from NE4, NE2
immediately reports the VP-RDI status.
NOTE
The backward segment point of NE2 does not capture e-t-e_VP-RDI cells.
After the backward end point of NE1 captures e-t-e_VP-RDI cells from NE5, NE1
immediately reports the RDI status.
Figure 3-4 Implementation of the AIS/RDI functions
[Figure: the VPC runs NE1 (end point) - NE2 (segment point) - NE3 (intermediate point) - NE4 (segment point) - NE5 (end point). NE3 inserts seg_VP-AIS and e-t-e_VP-AIS cells in the forward direction; NE4 returns seg_VP-RDI cells and NE5 returns e-t-e_VP-RDI cells; NE2 and NE1 report the RDI status.]
NOTE
If the ATM trunks at both ends of NE2 and NE3 are enabled with the IMA protocol, a unidirectional VPC
interruption from NE2 to NE3 will result in a bidirectional VPC interruption between NE2 and NE3 because
the OptiX OSN equipments support only Symmetrical Mode and Symmetrical Operation. In this case, NE1
and NE2 as shown in Figure 3-4 cannot capture any RDI cells, and will not report any RDI status.
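The AIS/RDI behaviour walked through above can be condensed into a toy model: downstream of the break, every non-intermediate point captures AIS and reports it; upstream segment and end points then receive the matching RDI. This is an assumed simplification, not the equipment's logic.

```python
# Toy model of AIS/RDI propagation on a broken VPC. `nodes` is the ordered
# list of OAM points along the connection; `fault_after` is the index of the
# last node upstream of the break. Roles and return format are assumptions.
def propagate_fault(nodes, fault_after):
    reports = {}
    # Forward direction: AIS cells are inserted downstream of the fault.
    for i, node in enumerate(nodes):
        if i <= fault_after:
            continue  # upstream of the break: no forward AIS seen
        if node["role"] == "intermediate":
            continue  # intermediate points pass AIS through untouched
        reports[node["name"]] = "AIS"  # captures AIS, returns RDI upstream
    # Backward direction: RDI reaches the upstream segment and end points.
    for i, node in enumerate(nodes):
        if i <= fault_after and node["role"] in ("end", "segment"):
            reports[node["name"]] = "RDI"
    return reports
```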
3.6.2 CC
This section provides an example to describe the CC function at the segment and end-to-end
levels.
As shown in Figure 3-5, on the ATM network, the ATM link is enabled with the CC function
at the segment and end-to-end levels: NE1 initiates an end-to-end CC test to NE5, and
NE2 initiates a segment CC test to NE4. CC cells are then continuously transmitted between
end-to-end points, and between segment points. The VPC between NE3 and NE4 breaks due to
a fault. In this case, the CC function is implemented as follows:
l
NE4 fails to receive user cells or seg_CC cells, so NE4 immediately reports the VP_LOC
alarm and transmits VP_AIS cells to the downstream.
NE5 receives only VP_AIS cells, but fails to receive e-t-e_CC cells or user cells. Therefore,
NE5 immediately reports the VP_AIS status.
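The sink-side detection above is a simple timeout check, sketched below. The function name is an assumption, and the 3.5-second default is taken from the nominal ITU-T I.610 loss-of-continuity detection time, used here as an assumption rather than a statement about this equipment.

```python
# Hypothetical sketch of CC supervision at the sink: if neither user cells
# nor CC cells arrive within the timeout, the sink declares loss of
# continuity (LOC) and inserts AIS cells downstream.
def check_continuity(last_cell_time, now, timeout=3.5):
    """Return the actions the sink takes when the link goes quiet."""
    if now - last_cell_time > timeout:
        return ["report VP_LOC", "insert AIS downstream"]
    return []
```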
Figure 3-5 Implementation of the CC function
[Figure: NE1 sends e-t-e_CC cells that NE5 (end point) receives, and NE2 (segment point) sends seg_CC cells that NE4 (segment point) receives; NE3 is the intermediate point.]
3.6.3 LB
This section provides an example wherein the LB function is enabled on segment points to
describe how the LB function is implemented.
As shown in Figure 3-6, segment points are set up in the backward direction of NE2 and in the
forward direction of NE4, and NE3 is the intermediate point.
1.
When the backward segment point of NE2 initiates an LB test to NE3, it inserts a seg_LB
cell whose LLID value is the same as that of NE3; when it initiates an LB test to NE4, it
inserts a seg_LB cell whose LLID value is the same as that of NE4. Before a cell is looped
back, its LI value is equal to 1.
2.
When the intermediate point (NE3) receives a seg_LB cell, it first analyzes whether the
LLID value of the cell is the same as the local LLID value.
l If the LLID values are different, the cell is transmitted to the downstream without being
processed.
l If the LLID values are the same, NE3 analyzes whether the LI value of the cell is equal
to 1. If the LI value of the cell is equal to 1, the cell is not looped back yet. NE3 then
sets the LI value to 0 and returns the cell, so that the backward segment point of NE2
can receive the returned cell within 6 (+/-1) seconds. If the LI value of the cell is equal
to 0, the cell is transmitted to the downstream without being processed.
3.
When the forward segment point of NE4 receives a seg_LB cell, it first analyzes whether
the LLID value of the cell is the same as the local LLID value.
l If the LLID values are different, the cell is discarded.
l If the LLID values are the same, the forward segment point of NE4 analyzes whether
the LI value of the cell is equal to 1. If the LI value of the cell is equal to 1, the cell is
not looped back yet. The forward segment point of NE4 then sets the LI value to 0 and
returns the cell, so that the backward segment point of NE2 can receive the returned cell
within 6 (+/-1) seconds. If the LI value of the cell is equal to 0, the cell is discarded.
4. If an LB cell fails to be looped back to the backward segment point of NE2 within 6 (+/-1)
seconds, the failure of the LB test is reported.
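Steps 2 and 3 above describe the same per-node decision with one difference: a non-matching or already-looped cell is forwarded by an intermediate point but discarded by the terminating segment point. A sketch, under an assumed simplified cell model:

```python
# Sketch of per-node seg_LB cell handling. `cell` is a dict with the LLID
# and LI fields only; node roles and return values are illustrative.
def handle_lb_cell(node_role, local_llid, cell):
    if cell["llid"] != local_llid:
        # Not addressed to this node: an intermediate point forwards the
        # cell, while the terminating segment point discards it.
        return "forward" if node_role == "intermediate" else "discard"
    if cell["li"] == 1:
        cell["li"] = 0      # mark the cell as looped back
        return "loop back"  # return the cell to the initiator
    # LI == 0: the cell was already looped back elsewhere.
    return "forward" if node_role == "intermediate" else "discard"
```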
Figure 3-6 Implementation of the LB function
[Figure: NE2 (segment point) inserts seg_LB cells with LLID="NE3", LI="1" and LLID="NE4", LI="1"; NE3 (intermediate point) and NE4 (segment point) loop the cells back with LI="0". NE5 is an end-to-end point.]
Prerequisites
l
Background Information
The segment end attribute of ATM connection points (CPs) contains Segment point,
Endpoint, Segment and Endpoint, and Non segment and Endpoint.
The way that the port handles OAM cells under each segment end attribute is described as
follows:
l If the segment end attribute is set to Segment point, only the segment-to-segment OAM
cells at the segment can be terminated.
l If the segment end attribute is set to Endpoint or Segment and Endpoint, the end-to-end
OAM cells and segment-to-segment OAM cells at the segment and end can be terminated.
l If the segment end attribute is set to Non segment and Endpoint, the OAM cell is not
terminated.
CAUTION
You cannot set an OAM segment endpoint or activate CC on a protection connection to which a
1+1 source or 1+1 sink protection group is applied. In addition, you cannot set the segment end
attribute on a connection that is added to a protection group.
Procedure
Step 1 In the NE Explorer, select the NE and choose Configuration > Packet Configuration > ATM
OAM Management from the Function Tree. Click the Segment End Attributes tab.
Step 2 Set the Segment and End Attribute of the connection point.
Step 3 Click Apply. A dialog box is displayed, indicating that the operation succeeded.
Step 4 Click Close.
----End
Prerequisites
l
Background Information
After you activate the CC check at the source and sink ends of a service, the source end
periodically builds and sends a CC cell. If the sink end does not receive the CC cell from the
source, it automatically reports the LOC alarm, and inserts the corresponding AIS cells
downstream.
CAUTION
When activating CC, activate the source and sink ends at almost the same time. It is
recommended that you activate the sink prior to the source. Otherwise, the NE may
report a timeout event.
Procedure
Step 1 In the NE Explorer, select the NE and choose Configuration > Packet Configuration > ATM
OAM Management from the Function Tree. Then, click the CC Activation Status tab.
Step 2 Set the CC Activate Flag of the connection point and the Segment and End Attribute of the
CC cell.
NOTE
l Segment and End Attribute: Sets the segment end attribute of the CC cell. It corresponds to the segment
end attribute of the connection point. The CC cell terminates at the connection point of the same segment
attribute.
l After CC Activate Flag is activated, the CC check is started.
Step 3 Click Apply. A dialog box is displayed, indicating that the operation succeeded.
Step 4 Click Close.
----End
Prerequisites
You must be an NM user with NE operator authority or higher.
Background Information
The LB test recognizes an NE by LLID, so the LLID must be unique on the network. After the
LLID is set for an NE, the LLID value is sent to all the boards to keep consistency of LLID
values on all the boards.
Procedure
Step 1 In the NE Explorer, select the NE and choose Configuration > Packet Configuration > ATM
OAM Management from the Function Tree. Then, click the LLID tab.
Step 2 Set the Country Code, Network Code, and NE Code.
Step 3 Click Apply. A dialog box is displayed, indicating that the operation succeeded.
Step 4 Click Close.
----End
Prerequisites
l
Background Information
During a remote loopback check, the source end builds an LB cell and starts a timer. If the
sink end receives the LB cell, it sends the cell back to the source. If the source end detects the
returned LB cell within a specified time, the loopback succeeds. Otherwise, the loopback fails.
Procedure
Step 1 In the NE Explorer, select the NE and choose Configuration > Packet Configuration > ATM
OAM Management. Click the Remote Loopback Test tab.
Step 2 Set the Loopback Point NE and Segment and End Attribute.
NOTE
Segment End Attribute of the ATM service specifies the type of OAM cells transmitted during an LB test.
l If Segment End Attribute is set to Segment Point, seg_LB cells are transmitted.
l If Segment End Attribute is set to Endpoint, e-t-e_LB cells are transmitted.
Step 3 Right-click Reporting of LB status information and choose Details... from the
shortcut menu. The detailed information is displayed. Check the additional
information to confirm the test result.
l If the test result is Test Failed, an error code and failure cause are displayed.
----End
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 In the NE Explorer, click the NE and choose Configuration > Packet Configuration > ATM
OAM Management. Click the Insert OAM Cell to ATM tab.
Step 2 Set Insert OAM Cell to ATM to Enabled. After the setting, AIS/RDI cells can be generated
at any connection point where a fault is detected and are transmitted to the remote end.
Step 3 Click Apply. The Operation Result dialog box is displayed, indicating that the operation
succeeded.
----End
[Figure 3-7: the Node B connects to NE1 through port 35-N1D12E-1(PORT-1); NE1 connects to NE2 over a PW; NE2 connects to the RNC.]
Service Planning
For ATM services, the segment and end attributes, CC activation status, or remote loopback test
of each connection needs to be set. In this case, the ATM service has only one connection. The
related parameters need to be set in the forward directions of the source NE and sink NE. The
forward direction is from the user side to the network side, that is, the directions from the NodeB
to NE1 and from the RNC to NE2 as shown in Figure 3-7. The service shown in Figure 3-7 is
taken as an example. Table 3-5 shows the parameters planned for ATM OAM.
Table 3-5 Parameters planned for ATM OAM
Attribute: NE — NE1 | NE2
Segment and End Attributes:
l Source: 35-N1D12E-1(PORT-1)-32-33 (NE1) | 32-N1AFO1-1(PORT-1)-52-53 (NE2)
l Sink: PW(ATM n to one VCC cell transport, 20)-32-33
l Connection Direction: Sink (NE1) | Source (NE2)
l Segment and End Attribute: Segment point | Segment point
CC Activation Status:
l Source: 35-N1D12E-1(PORT-1)-32-33 (NE1) | 32-N1AFO1-1(PORT-1)-52-53 (NE2)
l Sink: PW(ATM n to one VCC cell transport, 20)-32-33
l Connection Direction: Sink (NE1) | Source (NE2)
l Segment and End Attribute: Segment point | Segment point
l CC Activate Flag: Source Activate (NE1) | Sink Activate (NE2)
LLID:
l Country Code (Hexadecimal Code): [2BYTE]00 00 | [2BYTE]00 00
l Network Code (Hexadecimal Code): [2BYTE]00 00 | [2BYTE]00 00
l NE Code (Hexadecimal Code): [11BYTE]00 09 00 02 00 00 00 00 00 12 00 | [11BYTE]00 09 00 02 00 00 00 00 00 12 00
Remote Loopback Test:
l Source: 35-N1D12E-1(PORT-1)-32-33
l Sink: PW(ATM n to one VCC cell transport, 20)-32-33
l Connection Direction: Sink
l Segment and End Attribute: Segment point
l Loopback Point NE: NE2
l Test Result
Prerequisites
You must be an NM user with NE operator authority or higher.
An ATM service must be created.
Procedure
Step 1 On the U2000, set segment and end attributes for NE1 and NE2. For the setting method, see
Setting Segment End Attribute.
Set the following parameters that are related to the segment and end attributes of NE1:
l Source: 35-N1D12E-1(PORT-1)-32-33
l Sink: PW(ATM n to one VCC cell transport,20)-32-33
If a remote loopback test needs to be initiated on NE2 and performed on NE1, configure the test on NE2. The
configuration method is the same as that used on NE1.
----End
Alarm Name    Meaning
VP_AIS        The VP_AIS is an alarm indication signal for the virtual path (VP). This alarm
              occurs when a forward or backward VP that is configured with the segment end
              point attribute receives AIS cells, indicating that the upstream services are
              incorrect.
VP_LOC
VP_RDI
VC_AIS
VC_LOC
VC_RDI
Field                   Value                            Description
Source                  l UNIs-NNI: PW(type)-VPI-VCI     For example: PW(ATM n to one VCC cell transport, 16385)-10-66
Connection Direction    Source, Sink
Sink                    l UNIs-NNI: PW(type)-VPI-VCI     For example: PW(ATM n to one VCC cell transport, 16385)-10-66
CC Activate Flag
Loopback Point NE       NE ID or NE LLID
Test Result             Test succeeded
3.10.4 LLID
This section describes the parameters for LLID.
Table 3-10 lists the parameters for LLID.
Table 3-10 Parameters for LLID
Field                              Value
Country Code (Hexadecimal Code)    [2BYTE]00 00-ff ff (default: [2BYTE]00 00)
Network Code (Hexadecimal Code)    [2BYTE]00 00-ff ff (default: [2BYTE]00 00)
NE Code (Hexadecimal Code)         [11BYTE]00 00 00 00 00 00 00 00 00 00 00-ff ff ff ff ff ff ff ff ff ff ff
Field
Value
Description
Enabled, Disabled
Default: Enabled
4 ATM QoS
4.1 Introduction
This section provides the definition of ATM QoS and describes its purpose.
Definition
ATM QoS is a mechanism provided by the ATM network. With this mechanism, an ATM
network assures services of expected QoS objectives, such as bandwidth, delay, delay
variation, and packet loss ratio, in all situations, so that users know the service level to
expect.
The equipment can provide ATM services with ensured quality levels by applying ATM traffic
control policies.
Purpose
ATM QoS aims to prevent occurrence of congestion in ATM services and improve utilization
of resources.
If ATM services are transmitted in PWE3 mode, service policies with appropriate priorities can
be provided for various categories of ATM services carried by VPCs or VCCs before the ATM
cells are encapsulated into PW packets. As shown in Figure 4-1, ATM QoS generally functions
in the ingress direction of a UNI on a PE.
Figure 4-1 Functioning point of ATM QoS
[Figure: ATM cells from the CE are subject to ATM traffic control at the ingress of the PE before being carried across the PSN.]
Parameter    Full Name
PCR          Peak cell rate
SCR          Sustained cell rate
CDVT         Cell delay variation tolerance
MBS          Maximum burst size
MCR          Minimum cell rate
ATM Service Category    Full Name                          Application Example                          ATM Traffic Parameters
CBR                     Constant bit rate                  Voice, CBR video, and circuit emulation      PCR and CDVT
UBR                     Unspecified bit rate               LAN emulation, IP over ATM, and unspecified  PCR and CDVT
                                                           traffic (such as file transfer and email)
UBR+                    Unspecified bit rate plus          LAN emulation, IP over ATM, and unspecified  PCR, CDVT, and MCR
                                                           traffic (such as file transfer and email)
RT-VBR                  Real-time variable bit rate        Voice and VBR video                          PCR, SCR, CDVT, and MBS
NRT-VBR                 Non-real-time variable bit rate    Packet transmission, terminal meeting, and   PCR, SCR, and MBS
                                                           file transfer
Description of UBR: The UBR service is intended for non-real-time applications that allow many
bursts. The UBR service does not specify traffic-related service guarantees. Instead, the UBR
service only requires that the network side provide the service with best effort (BE), and the
network side does not provide any guarantee for the UBR service. In the case of network
congestion, UBR cells are discarded first.
ATM Service Category    ATM Traffic Descriptor       Traffic Parameters
UBR                     NoTrafficDescriptor
                        NoClpTaggingNoScr            Clp01Pcr, CDVT
                        NoClpNoScr                   Clp01Pcr
                        NoClpNoScrCdvt               Clp01Pcr, CDVT
                        ClpTransparentNoScr          Clp01Pcr, CDVT
CBR                     ClpNoTaggingNoScr            Clp01Pcr, Clp0Pcr
                        ClpTaggingNoScr              Clp01Pcr, Clp0Pcr
                        NoClpNoScr                   Clp01Pcr
                        NoClpNoScrCdvt               Clp01Pcr, CDVT
NRT-VBR / RT-VBR        NoClpScr                     Clp01Pcr, Clp01Scr, MBS
                        ClpNoTaggingScr              Clp01Pcr, Clp0Scr, MBS
                        ClpTaggingScr                Clp01Pcr, Clp0Scr, MBS
                        ClpTransparentScr            Clp01Pcr, Clp01Scr, MBS, CDVT
                        NoClpScrCdvt                 Clp01Pcr, Clp01Scr, MBS, CDVT
                        ClpNoTaggingScrCdvt          Clp01Pcr, Clp0Scr, MBS, CDVT
                        ClpTaggingScrCdvt            Clp01Pcr, Clp0Scr, MBS, CDVT
UBR+                    AtmNoTrafficDescriptorMcr    Clp01Mcr
                        AtmNoClpMcr                  Clp01Pcr, Clp01Mcr
                        AtmNoClpMcrCdvt              Clp01Pcr, Clp01Mcr, CDVT
CAC
The CAC function is used to determine whether a connection request can be accepted or should
be rejected. A connection request is accepted only when sufficient resources are available, in
order to maintain the agreed QoS of established connections.
The equipment determines whether a connection request can be accepted or should be rejected
according to the following principle: the SCR sum of all the connections does not exceed the
maximum service access bandwidth of the equipment.
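The admission principle above reduces to a single comparison. A minimal sketch, with illustrative names and units:

```python
# The CAC rule stated above: a new connection is accepted only if the SCR
# sum of all connections stays within the maximum service access bandwidth.
def cac_admit(existing_scrs, new_scr, max_bandwidth):
    return sum(existing_scrs) + new_scr <= max_bandwidth
```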
UPC/NPC
UPC/NPC is defined as the set of actions taken to monitor traffic and enforce the traffic contract.
Its main purpose is to monitor the cells received on ATM virtual connections against the
negotiated traffic parameters, thereby avoiding possible network congestion.
l Cell passing
The UPC function considers that the cell conforms to the negotiated traffic contract and
therefore allows the cell to pass.
l Cell tagging
Cell tagging is performed for CLP = 0 cells only, by converting CLP = 0 into CLP = 1. The
UPC function considers that the tagged cell conforms to the negotiated traffic contract and
therefore allows the cell to pass. In the case of network congestion, however, the tagged
cell is discarded.
l Cell discarding
The UPC function considers that the cell violates the negotiated traffic contract and
therefore discards the cell.
EPD/PPD
l EPD
If the EPD function is enabled, the system always monitors the status of the cell buffer.
Once network congestion is detected, the system discards all the cells in the next ATM
Adaptation Layer Type 5 (AAL5) packet. The EPD function is effective for congestion
prevention because cell discarding is performed at the packet level. The system, however,
cannot selectively discard packets because the EPD function cannot differentiate a good
packet from a bad packet.
l PPD
The system starts to discard an ATM cell and all the following cells in the same AAL5
packet, to reduce the load on the network, if all the following conditions are met: (1) the
PPD function is enabled; (2) AAL5 packets are used to carry ATM cells; (3) network
congestion occurs or the transmit traffic volume exceeds the amount of allocated
bandwidth. The AAL5 packet, however, may fail to be restored and becomes a bad packet
due to the loss of an ATM cell. Therefore, the AAL5 packet needs to be retransmitted.
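The PPD behaviour described above can be sketched on a stream of cells. The cell model (a `(packet_id, is_last)` pair) and the function name are assumptions for illustration, not the real cell format.

```python
# Illustrative PPD: once a cell of an AAL5 packet violates the contract,
# discard that cell and all the following cells of the same packet, so the
# partial (now unusable) packet does not load the network further.
def ppd_filter(cells, violating_index):
    """`cells` is a list of (packet_id, is_last) pairs; return the cells
    kept after applying PPD starting at `violating_index`."""
    bad_pkt = cells[violating_index][0]
    return [c for i, c in enumerate(cells)
            if i < violating_index or c[0] != bad_pkt]
```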
IETF RFC 2514: Definitions of Textual Conventions and Object-identities for ATM
Management
4.4 Availability
This section describes the support required by the application of the ATM QoS feature.
Version Support
Applicable Equipment    Applicable Version
U2000
Hardware Support
Applicable Board    Applicable Version    Applicable Equipment
TNN1AFO1
TNN1D75E
TNN1D12E
N1MD12
N1MD75
Applicable Object    Remarks
ATM policy
ATM CoS mapping table
Pcr/Scr
Maximum cell burst size and cell delay variation tolerance
NOTE
Setting the uplink policy of TNN1AFO1:
l The CDVT value needs to be set to meet CDVT <= 1/PCR.
l The MBS value needs to be set to meet MBS <= SCR.
Maintenance Principles
None.
4.6 Principles
ATM traffic management is achieved by using the generic cell rate algorithm (GCRA). The
GCRA measures the variation of each cell's actual arrival time from its expected
inter-arrival time. The variation can be a negative value (arrival ahead of schedule) or a positive
value (delayed arrival). If a cell arrives ahead of schedule, however, the variation must not exceed
the preset CDVT. Otherwise, the GCRA considers that the cell violates conformance to the
contract.
Figure 4-2 uses an example to describe how the equipment performs traffic management on cell
2 in four arrival scenarios based on the principles of the GCRA. Assume the following
conditions: The contracted maximum traffic rate is PCR; the UPC/NPC function is enabled;
variable T is equal to 1/PCR, that is, a cell is transmitted to the network at an interval not shorter
than T; cell 1 arrives at time t1; cell 2 is expected to arrive at time t2 (t2 = t1 + T).
Figure 4-2 Illustration of basic GCRA principles
[Figure: cell 1 arrives at time t1, and cell 2 is expected at time t2 = t1 + T. Four arrival scenarios are shown for cell 2: on schedule, delayed, slightly early within the CDVT, and very early. When cell 2 arrives very early and the variation exceeds the preset CDVT, the equipment discards or tags the cell. Cell 3 is then expected to arrive at time t2 + T.]
Cell Arrival                          UPC/NPC Operation
Normal (arrival on schedule)          Cell passing
Normal (delayed arrival)              Cell passing
Burst (arrival ahead of schedule)     Cell passing
Violation                             Cell tagging or discarding
In normal cases (for example, scenario 1 or scenario 2 in Figure 4-2), cell 2 uses capacity
T after arrival. When cell 3 arrives T after the arrival of cell 2, cell 2 leaks from the bucket
and the used capacity of the bucket remains T.
In the case of cell bursts (for example, scenario 3 in Figure 4-2), cell 2 arrives e early.
When cell 2 arrives, cell 1 partially leaks from the bucket and therefore a cell capacity e
remains in the bucket. In this case, the used capacity of the bucket is T + e. The leaky bucket
algorithm performs strict control and adjusts the expected arrival time of cell 3 to time t2
+ T (as specified in Table 4-4). As a result, if the arrival time of the subsequent n cell(s)
is normal (that is, the arrival time of the cells is not greater than t2 + n x T, where n = 1, 2,
3, ...), these cells can completely leak from the bucket; if bursts occur in the subsequent
cells (that is, the arrival time of certain cells is smaller than t2 + n x T), the used capacity
of the bucket gradually increases. When the used capacity of the bucket exceeds T + L,
overflow cells are tagged or discarded.
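The bucket behaviour above corresponds to GCRA(T, L) in its virtual-scheduling form, sketched below under the same definitions (T = 1/PCR, L = the CDVT-derived tolerance). The closure-based structure and names are illustrative assumptions.

```python
# Sketch of GCRA(T, L), virtual-scheduling form: a cell arriving earlier
# than TAT - L violates the contract (tagged or discarded); a conforming
# early arrival pushes the theoretical arrival time (TAT) out by T, so
# repeated bursts accumulate until the tolerance is exhausted.
def make_gcra(T, L):
    state = {"tat": None}  # theoretical arrival time of the next cell
    def conforms(t):
        if state["tat"] is None or t >= state["tat"]:
            state["tat"] = t + T  # on-schedule or delayed arrival
            return True
        if state["tat"] - t > L:
            return False          # too early: non-conforming cell
        state["tat"] += T         # early, but within the tolerance
        return True
    return conforms
```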
[Figure: a two-level leaky bucket. Cells that violate contract conformance are removed at the level-1 bucket and at the level-2 bucket.]
The ATM traffic management sequence by using the two-level leaky bucket algorithm is
different for different chips or service types, as shown in Table 4-5.
Table 4-5 ATM traffic management sequence by using the two-level leaky bucket algorithm
ATM Service Category: UBR
l NoTrafficDescriptor
l NoClpTaggingNoScr: Clp01Pcr, CDVT
l NoClpNoScr: Clp01Pcr
l NoClpNoScrCdvt: Clp01Pcr, CDVT
l ClpTransparentNoScr: Clp01Pcr, CDVT
ATM Service Category: CBR
l ClpNoTaggingNoScr: Clp01Pcr, Clp0Pcr
l ClpTaggingNoScr: Clp01Pcr, Clp0Pcr
Management sequence (TNN1AFO1 and TNN1D75E/TNN1D12E/N1MD75/N1MD12): at the
level-1 bucket, Clp01 is processed based on Clp01Pcr, and improper cells are discarded; at the
level-2 bucket, Clp0 is processed based on Clp0Pcr, and improper cells are discarded.
l NoClpNoScr: Clp01Pcr
l NoClpNoScrCdvt: Clp01Pcr, CDVT
ATM traffic
management
sequence by
using the twolevel leaky
bucket
algorithm
ATM
Servic
e
Catego
ry
NRTVBR
ATM Traffic
Category
Descriptor
Traffic
Param
eter 1
Traffic
Param
eter 2
Traffic
Param
eter 31
Traffic
Param
eter 4
NoClpScr
Clp01P
cr
Clp01S
cr
MBS
ClpNoTagging
Scr
Issue 03 (2012-11-30)
Clp01P
cr
Clp0Scr
MBS
TNN1
AFO1
TNN1
D75E,
TNN1
D12E,
N1MD
75,
N1MD
12
At
level-1
bucket,
Clp01 is
process
ed
based
on
Clp01P
cr, and
imprope
r cells
are
discarde
d.
At
level-1
bucket,
Clp01 is
process
ed
based
on
Clp01S
cr, and
imprope
r cells
are
discarde
d.
At
level-2
bucket,
Clp01 is
process
ed
based
on
Clp01S
cr, and
imprope
r cells
are
discarde
d.
At
level-2
bucket,
Clp01 is
process
ed
based
on
Clp01P
cr, and
imprope
r cells
are
discarde
d.
At
level-1
bucket,
Clp01 is
process
At
level-1
bucket,
Clp0 is
process
120
ATM traffic
management
sequence by
using the twolevel leaky
bucket
algorithm
ATM
Servic
e
Catego
ry
RTVBR
Issue 03 (2012-11-30)
TNN1
AFO1
TNN1
D75E,
TNN1
D12E,
N1MD
75,
N1MD
12
ed
based
on
Clp01P
cr, and
imprope
r cells
are
discarde
d.
At
level-2
bucket,
Clp0 is
process
ed
based
on
Clp0Scr
, and
imprope
r cells
are
discarde
d.
ed
based
on
Clp0Scr
, and
imprope
r cells
are
discarde
d.
At
level-2
bucket,
Clp01 is
process
ed
based
on
Clp01P
cr, and
imprope
r cells
are
discarde
d.
CDVT
At
level-1
bucket,
Clp01 is
process
ed
based
At
level-1
bucket,
Clp01 is
process
ed
based
ATM Traffic
Category
Descriptor
Traffic
Param
eter 1
Traffic
Param
eter 2
Traffic
Param
eter 31
Traffic
Param
eter 4
ClpTaggingScr
Clp01P
cr
Clp0Scr
MBS
ClpTransparentScr
Clp01P
cr
Clp01S
cr
MBS
121
ATM traffic
management
sequence by
using the twolevel leaky
bucket
algorithm
ATM
Servic
e
Catego
ry
Issue 03 (2012-11-30)
TNN1
AFO1
TNN1
D75E,
TNN1
D12E,
N1MD
75,
N1MD
12
CDVT
on
Clp01P
cr, and
imprope
r cells
are
discarde
d.
At
level-2
bucket,
Clp01 is
process
ed
based
on
Clp01S
cr, and
imprope
r cells
are
discarde
d.
on
Clp01S
cr, and
imprope
r cells
are
discarde
d.
At
level-2
bucket,
Clp01 is
process
ed
based
on
Clp01P
cr, and
imprope
r cells
are
discarde
d.
CDVT
At
level-1
bucket,
Clp01 is
process
ed
based
At
level-1
bucket,
Clp0 is
process
ed
based
ATM Traffic
Category
Descriptor
Traffic
Param
eter 1
Traffic
Param
eter 2
Traffic
Param
eter 31
Traffic
Param
eter 4
NoClpScrCdvt
Clp01P
cr
Clp01S
cr
MBS
ClpNoTaggingScrCdvt
Clp01P
cr
Clp0Scr
MBS
122
ATM traffic
management
sequence by
using the twolevel leaky
bucket
algorithm
ATM
Servic
e
Catego
ry
UBR+
Issue 03 (2012-11-30)
TNN1
AFO1
TNN1
D75E,
TNN1
D12E,
N1MD
75,
N1MD
12
CDVT
on
Clp01P
cr, and
imprope
r cells
are
discarde
d.
At
level-2
bucket,
Clp0 is
process
ed
based
on
Clp0Scr
, and
imprope
r cells
are
discarde
d.
on
Clp0Scr
, and
imprope
r cells
are
discarde
d.
At
level-2
bucket,
Clp01 is
process
ed
based
on
Clp01P
cr, and
imprope
r cells
are
discarde
d.
At level-1 bucket,
Clp01 is processed
based on
Clp01Mcr, and
ATM Traffic
Category
Descriptor
Traffic
Param
eter 1
Traffic
Param
eter 2
Traffic
Param
eter 31
Traffic
Param
eter 4
ClpTaggingScrCdvt
Clp01P
cr
Clp0Scr
MBS
AtmNoTraffic
DescriptorMcr
Clp01
Mcr
AtmNoClpMcr
Clp01P
cr
Clp01
Mcr
123
ATM traffic
management
sequence by
using the twolevel leaky
bucket
algorithm
ATM
Servic
e
Catego
ry
ATM Traffic
Category
Descriptor
Traffic
Param
eter 1
Traffic
Param
eter 2
Traffic
Param
eter 31
Traffic
Param
eter 4
AtmNoClpMcr
Cdvt
Clp01P
cr
Clp01
Mcr
CDVT
TNN1
AFO1
TNN1
D75E,
TNN1
D12E,
N1MD
75,
N1MD
12
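The two-level sequence in Table 4-5 can be sketched as two chained single-bucket checks. This is an illustrative model only, using the virtual-scheduling form of the leaky bucket; which parameter (Pcr or Scr) is checked at level 1 and which at level 2 depends on the chip, as the table describes, and the helper names and example parameters are hypothetical.

```python
def make_bucket(T, L):
    """Return a stateful conformance check for one leaky bucket:
    T is the cell period, L the burst tolerance."""
    state = {"tat": None}  # theoretical arrival time of the next cell
    def conforms(t):
        tat = state["tat"]
        if tat is None or t >= tat:
            state["tat"] = t + T
            return True
        if t >= tat - L:
            state["tat"] = tat + T
            return True
        return False  # improper cell: tagged or discarded
    return conforms

def two_level_police(arrivals, level1, level2):
    """A cell must conform at both buckets; both buckets see every cell."""
    results = []
    for t in arrivals:
        ok1 = level1(t)
        ok2 = level2(t)
        results.append(ok1 and ok2)
    return results

pcr_bucket = make_bucket(T=10, L=5)   # e.g. the Clp01Pcr check
scr_bucket = make_bucket(T=20, L=30)  # e.g. the Clp0Scr check (MBS sets L)
print(two_level_police([0, 10, 20, 30, 40], pcr_bucket, scr_bucket))
# → [True, True, True, True, False]
```

In the example the stream conforms to the peak-rate bucket throughout, but the sustained-rate bucket eventually rejects a cell once the burst tolerance is used up.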
NOTE
Prerequisites
You must be an NM user with NE operator authority or higher.
Context
The OptiX OSN 7500 II equipment supports the flexible mapping between the ATM service
type and PHB service class. Table 4-6 lists the default ATM CoS mapping relation.
Table 4-6 Mapping relation between the ATM service type and PHB service class
ATM Service Category | PHB Service Class
CBR | EF
RT-VBR | AF31
NRT-VBR | AF21
UBR+ | AF11
UBR | BE
PORT-TRANS | BE
NOTE
Procedure
Step 1 In the NE Explorer, click the NE and choose Configuration > Packet Configuration > QoS
Management > Diffserv Domain Management > ATM CoS Mapping Configuration from
the Function Tree.
Step 2 Click Create. The Create ATM CoS Mapping Relation dialog box is displayed.
Step 3 Set the following parameters in the dialog box.
l Mapping Relation ID
l Mapping Relation Name
l Mapping between the Service Type and PHB
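The default relation in Table 4-6 amounts to a simple lookup. The snippet below is only an illustrative model of that table; the dictionary and function names are hypothetical and are not a U2000 API.

```python
# Default ATM service category -> PHB service class (Table 4-6).
DEFAULT_ATM_COS_MAP = {
    "CBR": "EF",
    "RT-VBR": "AF31",
    "NRT-VBR": "AF21",
    "UBR+": "AF11",
    "UBR": "BE",
    "PORT-TRANS": "BE",
}

def phb_for(service_category):
    """Look up the PHB class; unknown categories fall back to best effort."""
    return DEFAULT_ATM_COS_MAP.get(service_category, "BE")

print(phb_for("RT-VBR"))  # → AF31
```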
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Packet Configuration > QoS
Management > Policy Management > ATM Policy from the Function Tree.
Step 2 Click New. The Create ATM Policy dialog box is displayed.
Step 3 Set the following parameters in the dialog box.
l Policy ID and name
NOTE
Select one from the drop-down list that contains five levels of ATM service policies. Alternatively, enter a
policy name.
l Service type
l Traffic parameter
l Frame discarding label
l UPC/NPC
NOTE
The service encapsulated in AAL5 is sliced into cells. Enable Traffic Frame Discarding Flag determines
whether individual cells or the complete AAL5 frames to which the cells belong are discarded. When
Enable Traffic Frame Discarding Flag is set to Enabled, the equipment discards the complete AAL5
frames for the cells.
The Scr, Pcr, MBS, and CDVT bucket parameters are functional only when UPC/NPC is enabled for the ATM
policy.
(Figure 4-5, networking diagram: NodeB 1 and NodeB 2 access NE1 and NE2 over ATM STM-1 links
carrying IMA groups IMA1 and IMA2. NE1, NE2, NE5, and NE6 form a GE ring at the access layer;
NE3 and NE4 sit on a 10 GE ring at the convergence layer, which connects to the RNC. PWs pw1,
pw2, and pw3 are carried over a working tunnel and a protection tunnel. Each IMA group carries
three connections, for R99, HSDPA, and signaling traffic, with UNI VPI/VCI values 1/100, 1/101,
and 1/102 mapped to NNI VPI/VCI values 50/32, 51/32, and 52/32 for IMA1, and 60/32, 61/32, and
62/32 for IMA2.)
Figure 4-5 shows the service types of IMA traffic, and Table 4-7 lists the QoS requirements.
Table 4-7 Service types and QoS requirements

Voice service, which is carried by the RT-VBR type:
l ATM policy: Policy ID: 1
l PW bandwidth: 4 Mbit/s
l Tunnel bandwidth: 30 Mbit/s

Signaling service, which is carried by the CBR type:
l ATM policy: Policy ID: 2; Policy name: CBR; Service type: CBR; Traffic type: NoClpNoScr;
Clp01Pcr(cell/s): 800; Enable Traffic Frame Discarding Flag: Disabled; UPC/NPC: Disabled
l PW bandwidth: 1 Mbit/s

Data service, which is carried by the UBR type:
l ATM policy: Policy ID: 3; Policy name: UBR; Service type: UBR; Traffic type:
NoTrafficDescriptor; Enable Traffic Frame Discarding Flag: Disabled; UPC/NPC: Disabled
l PW bandwidth: 15 Mbit/s
Prerequisites
l
Procedure
Step 1 Configure the RT-VBR policy.
1.
In the NE Explorer, select NE1 and choose Configuration > Packet Configuration > QoS
Management > Policy Management > ATM Policy.
2.
Click New, and set the parameters in the Create ATM Policy dialog box that is displayed. Click
OK.
Set the parameters for the RT-VBR policy as follows:
l Policy ID: 1
l Policy name: RT-VBR
l Service type: RT-VBR (Select the service type based on the type of incoming service.
Specifically, the voice service corresponds to RT-VBR, which is of the highest service
priority.)
l Traffic type: ClpNoTaggingScrCdvt (This parameter indicates that the two-level bucket
is valid. At one level, CLP01 is processed based on Clp01Pcr, and improper cells are
discarded. At the other level, CLP0 is processed based on Clp0Scr, and improper cells
are discarded.)
l Clp01Pcr(cell/s): 4000 (This parameter indicates the maximum permitted rate at which
cells are transmitted. Set the parameter based on the service rate.)
l Clp0Scr(cell/s): 1000 (This parameter indicates the sustainable cell rate at which cells
are transmitted. Set the parameter based on the service rate.)
l MBS(cell): 100 (This parameter indicates the maximum number of burst cells tolerated
by the equipment after the traffic exceeds Clp01Pcr.)
l CDVT(us): 10000 (This parameter indicates the capability of tolerating burst ATM
cells. When a cell arrives earlier than expected, the cell is a burst cell. The burst volume
is measured by the time difference between the actual arrival time and the expected arrival
time. If the accumulated burst volume of multiple consecutive cells exceeds the CDVT, the
equipment discards the improper cells. Set the parameter to a large value, when possible, to
minimize the number of lost packets.)
l Enable Traffic Frame Discarding: Disabled (The service encapsulated in AAL5 is sliced
into cells. This parameter specifies whether to discard cells or to discard the complete
AAL5 frames for the cells.)
l UPC/NPC: Enable (UPC and NPC stand for user parameter control and network
parameter control respectively. The bucket algorithm is based on the content of UPC/
NPC. UPC/NPC is intended to monitor the cells received by the equipment based on
the specified traffic parameters, and thus to avoid network congestion. The Scr, Pcr,
MBS, and CDVT bucket parameters are functional only when UPC/NPC is enabled for
the ATM policy.)
3.
Click OK.
4.
In the NE Explorer, select NE3 and repeat the previous steps to create the ATM policies
for NE3. Set the policy-related parameters of NE3 as the same as the policy-related
parameters of NE1.
Step 2 Configure the CBR policy.
1.
In the NE Explorer, select NE1 and choose Configuration > Packet Configuration > QoS
Management > Policy Management > ATM Policy in the Function Tree.
2.
Click New, set the parameters in the Create ATM Policy dialog box. Click OK.
Set the parameters for the CBR policy as follows:
l Policy ID: 2
l Policy name: CBR
l Service type: CBR (Select the service type based on the type of incoming service.
Specifically, the signal service corresponds to the CBR service, which is of the highest
service priority.)
l Traffic Type: NoClpNoScr (The bucket at the first level takes effect, processes the cells
with the Clp01 flags based on Clp01Pcr, and discards the cells without the Clp01 flags.)
l Clp01Pcr(cell/s): 800
l Enable Traffic Frame Discarding: Disabled
l UPC/NPC: Enable
3.
Click OK.
4.
In the NE Explorer, select NE3 and repeat the previous two steps to create the ATM policies
for NE3. Set the policy-related parameters of NE3 as the same as the policy-related
parameters of NE1.
Step 3 Configure the UBR policy.
1.
In the NE Explorer, select NE1 and choose Configuration > Packet Configuration > QoS
Management > Policy Management > ATM Policy in the Function Tree.
2.
Click New, and set the parameters in the Create ATM Policy dialog box. Click OK.
Set the parameters for the UBR policy as follows:
l Policy ID: 3
l Policy name: UBR
l Service type: UBR (Select the service type based on the type of incoming service.
Specifically, the data service corresponds to the UBR service, which is of the lowest
service priority. No QoS is ensured for the accessed service; in the case of network
congestion, the UBR cells are discarded first.)
l Traffic type: NoTrafficDescriptor
l Enable Traffic Frame Discarding: Disabled
l UPC/NPC: Disabled
3.
Click OK.
4.
In the NE Explorer, select NE3 and repeat the previous steps to create the ATM policies
for NE3. Set the policy-related parameters of NE3 as the same as the policy-related
parameters of NE1.
DefaultAtmCosMap is the default ATM CoS mapping table of the equipment and cannot be deleted. In this
example, the default CoS mapping table is used. The user can also create an ATM CoS mapping table as
required. For details on how to create an ATM CoS mapping table, see Configuring ATM CoS Mapping.
NOTE
For the ATM service configuration method, see Configuration Example (UNIs-NNI ATM Services).
----End
Field | Value | Description
PW ID | - | -
CoS Mapping | Default: 1(DefaultAtmCosMap) | -
Mapping Relation ID | 2 to 8 | Default: none. NOTE: 1 is the default ID for the ATM service
class mapping table.
ATM Service Category | UBR, CBR, RT-VBR, NRT-VBR, UBR+ | -
l CS6 to CS7: highest service classes, mainly applicable to signaling transmission.
l EF: fast forwarding, applicable to services of low transmission delays and low packet loss
rates.
l AF1 to AF4: assured forwarding, applicable to services that require an assured transmission
rate rather than delay or jitter limits.
l PORT-TRANS
NOTE
The AF1 class includes three subclasses: AF11, AF12, and AF13. Only one of these subclasses
can take effect for one queue. It is the same case with AF2, AF3, and AF4.

Field | Value | Description
Policy ID | 1 to 256 | Default: 1
Field | Value | Description
Policy Name | - | NOTE: You can select one of the five ATM service policy names from the
drop-down list or enter the policy name.
Field | Value | Description
Service Type | - | Default: UBR
UBR+: applicable to applications that require an assured minimum cell rate, which is
indicated by the minimum cell rate (MCR) parameter. The other characteristics of the UBR+
service are the same as the corresponding characteristics of the UBR service.
Set this parameter according to the planning information.
Traffic Service
Clp01Pcr(cell/s): 90-149078. Default: None.
Clp01Scr(cell/s): 90-149078. Default: None.
Clp0Pcr(cell/s): 90-149078. Default: None.
Clp0Scr(cell/s): 90-149078. Default: None.
Clp01Mcr(cell/s): 566-32664. Default: None.
MBS(cell): 2-200000. Default: None.
CDVT(us): 7 to 13300000. Default: None.
Enable Traffic Frame Discarding Flag: Enabled, Disabled. Default: Disabled
UPC/NPC: Disabled, Enabled. Default: Disabled
5 IMA
5.1 Introduction
The inverse multiplexing for ATM (IMA) technology multiplexes multiple low-speed ATM
links into one high-speed logical link, providing protection for services when one of the
low-speed ATM links fails.
Definition
Specifically, the IMA technology provides inverse multiplexing of an ATM cell stream over
multiple low-speed links and retrieves the original stream at the far-end from these physical
links.
Figure 5-1 IMA
(An IMA group at each end inverse-multiplexes the TRUNK link over physical links 0, 1, and 2,
each terminated by a PHY.)
The IMA technology helps to group multiple physical links to form a higher bandwidth logical
link whose rate is approximately the sum of the link rates. When the member links in the IMA
group are dynamically added/deleted, or fail/recover, the logical link changes only in bandwidth.
The services on the logical link are not interrupted as long as the bandwidth of the logical
link is not lower than the required minimum bandwidth.
When a link in an IMA group fails, the cells carried by the link are distributed to other normal
links. In this manner, the IMA services are protected.
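The bandwidth behavior just described can be modeled in a few lines. The sketch below is illustrative; rates are given in kbit/s, an E1 member is taken as 2048 kbit/s, a rate of 0 marks a failed link, and the function name is hypothetical.

```python
def ima_group_bandwidth(link_rates, min_links):
    """Logical-link rate is roughly the sum of the active member rates;
    the group stays usable while at least min_links members are active."""
    active = [r for r in link_rates if r > 0]  # 0 marks a failed link
    return sum(active), len(active) >= min_links

# Three E1 members; one failure shrinks the bandwidth but services survive.
print(ima_group_bandwidth([2048, 2048, 2048], min_links=2))  # → (6144, True)
print(ima_group_bandwidth([2048, 0, 2048], min_links=2))     # → (4096, True)
```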
Purpose
With the IMA technology, the transport network can transmit ATM services from customer
equipment on an IMA group formed by multiple low-speed links (for example, the three E1 links
shown in Figure 5-2), therefore increasing link bandwidth and providing link protection.
Issue 03 (2012-11-30)
141
5 IMA
Figure 5-2 (networking): a NodeB connects to the packet transmission equipment over an IMA
group of E1 links.

(Figure: IMA protocol reference model. Management plane functions sit above the ATM layer,
the IMA-specific TC sublayer, and the physical layer, which comprises the interface-specific
TC sublayer and the physical medium dependent sublayer. The IMA-specific TC sublayer handles
IMA connectivity, ICP cell errors (OIF), LIF/LODS/RDI-IMA defect processing, RDI-IMA alarm
generation, and Tx/Rx IMA link state reports.)
Filler Cell
Filler cells are used to stuff IMA frames when no ATM cell arrives at the ATM layer.
Table 5-1 provides the format definition of the filler cell.
Table 5-1 Filler cell format
Octet | Label | Comment
1-5 | ATM cell header | -
6 | OAM label | -
7 | Cell ID / Link ID | Bit 7: IMA OAM cell type (0: filler cell; 1: ICP cell)
8-51 | Unused | -
52-53 | CRC error control | -
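The cell-type flag in Table 5-1 is a single bit test. The helper below is illustrative only; it assumes the cell ID octet has already been extracted from the cell, and the function name is hypothetical.

```python
def ima_oam_cell_type(cell_id_octet):
    """Bit 7 of the cell ID octet distinguishes IMA OAM cells:
    0 indicates a filler cell, 1 an ICP cell."""
    return "ICP" if (cell_id_octet >> 7) & 1 else "filler"

print(ima_oam_cell_type(0b10000011))  # → ICP
print(ima_oam_cell_type(0b00000011))  # → filler
```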
ICP Cell
ICP cells are used to communicate information for setting up the IMA protocol between two
IMA units.
Table 5-2 provides the format definition of the ICP cell.
Table 5-2 ICP cell format
Octet | Label | Comment
1-5 | ATM cell header | -
6 | OAM label | -
7 | Cell ID / Link ID | Bit 7: IMA OAM cell type (1: ICP cell; 0: filler cell)
8 | IMA frame sequence number (IFSN) | -
9 | ICP cell offset | Range: 0 to M-1. Indicates the position of the ICP cell within the IMA
frame. M indicates the length of the IMA frame.
10 | Link stuff indication (LSI) | -
11 | Status and control change indication (SCCI) | -
12 | IMA ID | -
13 | IMA group status and control | -
14 | Transmit timing information | -
15 | Tx test control | -
16 | Tx test pattern | -
17 | Rx test pattern | -
18 | Link 0 information | -
19-49 | Link 1-31 information | -
50 | Unused | -
51 | End-to-end channel | -
52-53 | CRC error control | -
IMA Frame
The IMA frame is used as the unit of control in the IMA protocol. An IMA frame is defined as
M consecutive cells (numbered 0 to M-1) on each link, across the links (a) in an IMA group. M
is called the length of an IMA frame.
NOTE
a: "Across the links" in an IMA group refers to the mechanism in which the transmit end
distributes cells from link to link within an IMA group. That is, ATM cells are placed on each
link in a circulating manner.
One of the M cells on each link within an IMA group is an ICP cell; that is, the ICP cell is sent
once on each link per IMA frame. The ICP cell may be at different positions on different links.
In addition to ICP cells, each IMA frame has filler cells and ATM layer cells. The filler cells
are used to fill IMA frames when no ATM cell is received, and are discarded at the receive end.
The IFSN field in the ICP cell, indicating the sequence number of the IMA frame, increments
from 0 to 255 and repeats the sequence. Within an IMA frame period, the ICP cells on all the
links have the same IFSN value.
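The IFSN rules above (increment modulo 256, and an identical value on all links within one frame period) can be sketched as follows; the helper names are illustrative.

```python
def next_ifsn(ifsn):
    """The IFSN increments from 0 to 255 and then wraps back to 0."""
    return (ifsn + 1) % 256

def links_aligned(ifsn_per_link):
    """Within one IMA frame period, the ICP cells on all links must
    carry the same IFSN value."""
    return len(set(ifsn_per_link)) == 1

print(next_ifsn(255))            # → 0
print(links_aligned([7, 7, 7]))  # → True
print(links_aligned([7, 8, 7]))  # → False
```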
Figure 5-4 shows an example of the transmission of IMA frames over three links in an IMA
group. In IMA frame 0, the IFSN of the ICP cell is 0; in IMA frame 1, the IFSN of the ICP cell
is 1; in IMA frame 2, the IFSN of the ICP cell is 2.
Figure 5-4 Transmission of IMA frames over three links in an IMA group
(Each IMA frame carries M cells per link: one ICP cell (ICP0, ICP1, and ICP2 on links 0, 1,
and 2), plus ATM layer cells and filler cells (F). In IMA frame 0 the IFSN of the ICP cells
is 0, in frame 1 it is 1, and in frame 2 it is 2.)
The transmit end allocates the ICP cell offsets of 1, 0, and 2 to links 0, 1, and 2 respectively.
During transmission, link 0 and link 2 have the same amount of propagation delay but link
1 has a delay one cell time longer than link 0 and link 2.
At the receive end, the cells from each link enter the delay compensation buffer (DCB). In
the DCB, the ICP cell of each link obtains its offset based on the IMA frame header. In this
manner, the entire IMA frame is aligned. In this example, the offset of ICP1 is 0. Therefore,
ICP1 in the DCB is moved to the frame header so that the entire IMA frame is aligned.
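The compensation step can be sketched as padding the faster links up to the slowest link, which is what the DCB alignment amounts to in this example. Illustrative only; "-" marks one cell time of buffering delay, and the function name is hypothetical.

```python
def align_links(streams, delays):
    """Delay-compensation sketch: pad the faster links so that cells that
    belong to the same IMA frame line up at the same index."""
    max_delay = max(delays)
    return [["-"] * (max_delay - d) + s for s, d in zip(streams, delays)]

# Link 1 arrives one cell time later than links 0 and 2, so those are padded.
print(align_links([["ICP0", "ATM"], ["ICP1", "ATM"], ["ICP2", "ATM"]],
                  delays=[0, 1, 0]))
# → [['-', 'ICP0', 'ATM'], ['ICP1', 'ATM'], ['-', 'ICP2', 'ATM']]
```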
(Figure 5-5: delay compensation at the receive end. Cells from links 0, 1, and 2 leave the
transmit end and enter the delay compensation buffer at times t1 to t4; link 1 is delayed by
one cell time, and the ICP cells (ICP0, ICP1, ICP2) are shifted by their offsets so that the
IMA frame is aligned.)
member links are co-sourced. In this manner, the transmit end triggers a filler event to match
the rate of each link with a fixed interval of 2048 ATM cells.
Figure 5-6 IMA timing configuration reference model
(The Tx clock unit selects, through a switch Sw, between the local oscillator and an external
source; per-link switches Sw(0) to Sw(N-1) feed the Tx/Rx pairs of links 0 to N-1.)
5.4 Availability
The IMA function requires the support of the applicable equipment, boards, and software.
Version Support
Applicable Equipment
Applicable Version
U2000
Hardware Support
Applicable Board | Applicable Version | Applicable Equipment
TNN1D75E, TNN1D12E, N1MD12, N1MD75 | - | -
Remarks
IMA
Applicable Object | Remarks
IMA group | -
IMA symmetric mode | The TNN1D75E/TNN1D12E/N1MD75/N1MD12 supports only the symmetrical mode
and symmetrical operation.
Maintenance Principles
None.
5.6 Principles
This section describes the principles of the IMA feature.
Figure 5-7 shows the processing of an ATM cell stream in the IMA technology.
l
The IMA transmitter sends the ATM user cells and ICP cells from the ATM layer to the
active member links within an IMA group. When the ATM user cells need to be transmitted
but no cell is received from the ATM layer, filler cells are transmitted to maintain a
continuous stream of IMA cells.
The IMA receiver retrieves the original ATM cell stream by using the link differential delay
compensation technique and then transmits the ATM cell stream to the ATM layer.
When a member link in the IMA group is faulty, the transmitter and receiver transmit the
new IMA frame wherein the member link is removed. In this manner, link protection is
achieved.
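The transmit-side behavior in the first bullet, round-robin distribution of the ATM cell stream over the member links with filler cells inserted when the ATM layer has nothing to send, can be sketched as follows (illustrative only; "F" stands for a filler cell and the function name is hypothetical):

```python
def distribute_cells(atm_cells, active_links, slot_count):
    """Distribute an ATM cell stream over the active links of an IMA group
    in a circulating (round-robin) manner; emit a filler cell ('F')
    whenever no ATM cell is available, keeping the IMA stream continuous."""
    streams = {link: [] for link in active_links}
    src = iter(atm_cells)
    for i in range(slot_count):
        link = active_links[i % len(active_links)]  # link-to-link circulation
        streams[link].append(next(src, "F"))
    return streams

# Five user cells over three links for six cell slots; the last slot is filler.
print(distribute_cells(["c0", "c1", "c2", "c3", "c4"], [0, 1, 2], slot_count=6))
# → {0: ['c0', 'c3'], 1: ['c1', 'c4'], 2: ['c2', 'F']}
```

Removing a failed link from `active_links` and rerunning the distribution mirrors how the remaining links take over the cells of the failed link.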
(Figure 5-7: an IMA group at each end inverse-multiplexes the TRUNK link over physical links
0, 1, and 2, each terminated by a PHY.)
Operation
Description
Required.
l For ATM/IMA services, set Level to E1. For
Fractional E1 services, set Level to Fractional
E1.
l Set the other parameters according to the
planning information.
NOTE
When the E1 frame mode is PCM 30, timeslot 16 cannot
be bound.
Required.
l For an ATM trunk requiring the IMA function,
set IMA Protocol Enable Status to
Enabled.
l Set Clock Mode of the local NE and the NE
at the opposite end of the IMA trunk to be the
same as Clock Mode of the interconnected
NodeB.
l The other parameters are valid only for IMA
E1 and Fractional IMA. The parameters at
both ends of an IMA link must be set to the
same values; the default values are
recommended.
Optional.
l Set Port Type and ATM Cell Payload
Scrambling based on the type of the access
equipment; the default values are
recommended. The parameter values must be
the same at both ends of a link.
l The other parameters take the default values.
Prerequisites
l
For Fractional ATM/IMA ports, set Port Mode in PDH Interface to Layer 1 and configure
serial port parameters.
Context
NOTE
Only the E1 ports or the serial ports on the same processing board can be bound.
Procedure
Step 1 In the NE Explorer, select the NE from the Object Tree.
Step 2 Choose Configuration > Packet Configuration > Interface Management > ATM IMA
Management from the Function Tree.
Step 3 Click the Binding tab.
Step 4 Click Configuration.
Step 5 The Bound Path dialog box is displayed. Configure the related parameters according to the planning
information. Then, click OK.
NOTE
l If ATM/IMA services need to be mapped into the ATM TRUNK that binds one or more E1 ports, select
E1 in Level.
l If ATM/IMA services need to be mapped into the ATM TRUNK that binds one or more serial ports,
select Fractional E1 in Level.
Step 6 Click Apply, and then close the dialog box that is displayed.
----End
Follow-up Procedure
If the IMA group is required, you need to bind the member links of the IMA group with the
ATM TRUNK, enable the IMA protocol for the ATM TRUNK, and then configure the
parameters of the IMA group.
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select the NE from the Object Tree.
Step 2 Choose Configuration > Packet Configuration > Interface Management > ATM IMA
Management from the Function Tree.
Step 3 Click the IMA Group Management tab.
Step 4 Optional: Select an ATM trunk. The services associated with the ATM trunk are displayed in
the lower pane.
Step 5 Configure the parameters of the IMA Group Management tab according to planning
information.
Step 6 Click Apply, and then close the dialog box that is displayed.
Step 7 Optional: Click Reset to reset an IMA group and re-enable the IMA group protocol.
----End
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select the NE from the Object Tree.
Step 2 Choose Configuration > Packet Configuration > Interface Management > ATM IMA
Management from the Function Tree.
Step 3 Click the ATM Interface Management tab.
Step 4 Configure the ATM Interface Management attributes.
Step 5 Click Apply, and close the dialog box that is displayed.
----End
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select the NE from the Object Tree.
Step 2 Choose Configuration > Packet Configuration > Interface Management > ATM IMA
Management from the Function Tree.
Step 3 Click the IMA Group States tab.
Step 4 Click Query, and then close the dialog box that is displayed.
Step 5 Query the running status of an IMA group.
----End
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select the NE from the Object Tree.
Step 2 Choose Configuration > Packet Configuration > Interface Management > ATM IMA
Management from the Function Tree.
Step 3 Click the IMA Link States tab.
Step 4 Click Query, and then close the dialog box that is displayed.
Step 5 Query the running status of the member links of an IMA group.
----End
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select the NE from the Object Tree.
Step 2 Choose Configuration > Packet Configuration > Interface Management > ATM IMA
Management from the Function Tree.
Step 3 Click the IMA Group Management tab and select the IMA group.
Step 4 Select an ATM trunk.
Step 5 Click Reset, and close the dialog box that is displayed.
----End
Prerequisites
l
Context
CAUTION
If any service is configured and activated at the ATM Trunk port for the IMA group, modifying
the IMA group may interrupt services. Exercise caution when performing this operation.
Procedure
Step 1 In the NE Explorer, select the NE from the Object Tree.
Step 2 Choose Configuration > Packet Configuration > Interface Management > ATM IMA
Management from the Function Tree.
Step 3 Click the IMA Group Management tab.
Step 4 Configure the parameters of the IMA Group Management tab according to planning
information.
Step 5 Click Apply, and close the dialog box that is displayed.
----End
Prerequisites
l
The IMA Protocol Enable Status of the IMA group must be Disabled.
Context
CAUTION
If any service is configured and activated at the ATM Trunk port for the IMA group, deleting
the IMA group may interrupt the services. Exercise caution when performing this operation.
Procedure
Step 1 In the NE Explorer, select the NE from the Object Tree.
Step 2 Choose Configuration > Packet Configuration > Interface Management > ATM IMA
Management from the Function Tree.
Step 3 Click the Binding tab.
Step 4 Select an IMA group. Click Delete, and close the dialog box that is displayed.
----End
(Networking: a NodeB connects to NE1.)
Attribute | Description
Available Boards | 35-N1D12E
Configuration Ports | 35-N1D12E-1(Trunk-1)
Level | E1
Direction | Bidirectional
Available Resources | 35-N1D12E-1(PORT-1), 35-N1D12E-2(PORT-2), 35-N1D12E-3(PORT-3)
Table 5-5 lists the IMA group management parameters planned for NE1.
Table 5-5 IMA group management parameters planned for NE1
Attribute | Description
IMA Protocol Enable Status | Enabled
IMA Protocol Version | 1.1
IMA Transit Frame Length | 128
Maximum Delay Between Links(ms) | 25
Clock Mode | CTC Mode
Huawei Proprietary and Confidential
Copyright Huawei Technologies Co., Ltd.
Table 5-6 lists the ATM interface parameters planned for NE1.
Table 5-6 ATM interface parameters planned for NE1
Attribute | Description
Name | conn_nodeb_trunk1
Port Type | UNI
ATM Cell Payload Scrambling | Enabled
Prerequisites
l
You must understand the networking, requirements and service planning of the example.
Procedure
Step 1 In the NE Explorer, select NE1 from the Object Tree.
Step 2 Choose Configuration > Packet Configuration > Interface Management > ATM IMA
Management from the Function Tree.
Step 3 Bind ATM TRUNKs.
1.
Click the Binding tab.
2.
Click Configuration.
3.
The Bound Path dialog box is displayed. Configure the related parameters according to the
planning information. Then, click OK to bind the ATM TRUNK.
Attribute | Description
Available Boards | 35-N1D12E
Configuration Ports | 35-N1D12E-1(Trunk-1)
Level | E1
Direction | Bidirectional
Available Resources | 35-N1D12E-1(PORT-1), 35-N1D12E-2(PORT-2), 35-N1D12E-3(PORT-3)
4.
Click Apply, and then close the dialog box that is displayed.
2.
Configure the parameters of the IMA Group Management tab according to planning
information.
3.
Attribute | Description
IMA Protocol Enable Status | Enabled
IMA Protocol Version | 1.1
IMA Transit Frame Length | 128
Maximum Delay Between Links(ms) | 25
Clock Mode | CTC Mode
Click Apply, and then close the dialog box that is displayed.
2.
3.
Attribute | Description
Name | conn_nodeb_trunk1
Port Type | UNI
ATM Cell Payload Scrambling | Enabled
----End
Table 5-7 lists the alarms that are related to IMA.
Table 5-7 Alarms related to IMA
Alarm Name
IMA_GROUP_RE_DOWN
IMA_GROUP_LE_DOWN
ALM_IMA_LINK_LCD
ALM_IMA_LODS
IMA_TXCLK_MISMATCH
ALM_IMA_RFI
ALM_IMA_LIF
ALM_IMA_RE_RX_UNUSABLE
ALM_IMA_RE_TX_UNUSABLE
LFA
ALM_E1AIS
LMFA
Field | Value | Description
Available Boards | - | -
Configuration Ports | - | -
Level | E1, Fractional E1 | Default: E1
Direction | Bidirectional | -
Optical Interface | - | -
Available Resources | - | -
Selected Bound Paths | - | -
Display in Combination | - | Default: Selected
Field | Value | Description
VCTRUNK | E1, Fractional E1 | -
Bound Paths | Example: 4 | -
Field | Value | Description
VCTRUNK | Enabled, Disabled | Default: Disabled
Field | Value | Description
Minimum Number of Active Transmitting Links | 1-16 | Default: 1
Field | Value | Description
Minimum Number of Active Receiving Links | 1-16 | Default: 1
IMA Protocol Version | 1.0, 1.1 | Default: 1.1
Field | Value | Description
IMA Transit Frame Length | - | Default: 128
Field | Value | Description
Maximum Delay Between Links(ms) | 1-120 | Default: 25
Field | Value | Description
- | - | Displays the type of the service associated with the port.
Service ID | For example: 20 | -
Service Name | - | -
Used Resource | - | -
Field | Value | Description
Port | Example: Port 1 | -
Port Type | UNI, NNI | Default: UNI.
Field | Value | Description
ATM Cell Payload Scrambling | Enabled, Disabled | Default: Enabled
Max. VPI | - | -
Min. VCI | - | -
Max. VCI | - | -
VCCSupported VPI Count | - | -
Loopback | No Loopback, Outloop, Inloop | Default: No Loopback
Field | Value | Description
VCTRUNK | - | -
Near-End Group Status | Start-Up, Start-Up-ACK, Config-Aborted, Insufficient-Links,
Operational | Default: Start-Up
Far-End Group Status | Start-Up, Start-Up-ACK, Config-Aborted, Insufficient-Links,
Operational | Default: Start-Up
Transmit Rate (cell/s) | - | -
Receive Rate (cell/s) | - | -
Number of Transmit Links | - | -
Number of Receive Links | - | -
Number of Activated Transmit Links | - | -
Number of Activated Receive Links | - | -
Field | Value | Description
VCTRUNK | - | Displays E1 links.
Near-End Receiving Status | - | Default: Unknown
Field | Value | Description
Near-End Transmitting Status | - | -
Far-End Receiving Status | - | -
Field | Value | Description
Far-End Transmitting Status | - | -
6 ETH-OAM
Definition
As a protocol at the MAC layer, ETH-OAM checks Ethernet links by transmitting OAM
protocol packets. The protocol is independent of the transmission medium. The
OAM packets are processed only at the MAC layer and have no impact on the other layers of the
Ethernet. Moreover, as a low-rate protocol, the ETH-OAM protocol occupies little bandwidth.
Thus, this protocol does not affect the services carried on the link.
Application
Figure 6-1 shows the application of Ethernet service OAM and Ethernet port OAM.
Figure 6-1 Application of Ethernet service OAM and Ethernet port OAM
(Topology: customer networks with routers 1, 2, and 3 connect through CE1 to CE4 and access
networks to PE1, P, and PE2 in the core network; the CEs, PEs, and P are OptiX NEs. Ethernet
port OAM runs on the link between a router and its adjacent CE; Ethernet service OAM spans the
network end to end.)
The application of Ethernet service OAM is based on services. It realizes end-to-end
Ethernet link maintenance within each maintenance domain.
The application of Ethernet port OAM does not focus on a specific service; it focuses on
the maintenance of the Ethernet link between two directly connected pieces of equipment,
that is, Ethernet in the first mile (EFM).
The Ethernet service between router 1 and router 3 involves NEs CE1, PE1, P, PE2, and CE3
in transmission.
On the premise that router 1 supports the Ethernet port OAM protocol, Ethernet port OAM can be used to test the continuity and performance of the end-to-end Ethernet link from client-side router 1 to NE CE1 at the access layer.
Ethernet service OAM can be used to test the continuity and performance of the end-to-end Ethernet links that carry the service between CE1 at the access layer and CE3.
In terms of the maintenance domain, Figure 6-1 shows two main application scenarios of Ethernet service OAM.
l Maintenance domain related to users at the access layer: MEPs are created on CE1 and CE3 at the two ends of the network, and MIPs are created on PE1 and PE2. In this manner, all links that carry customer services across the entire Ethernet can be monitored.
l Maintenance domain related to operators at the core layer: MEPs are created on PE1 and PE2. In this manner, the link quality on the access side is ignored and monitoring focuses on the carrier network.
Purpose
Compared with ETH-OAM, the existing network maintenance and fault locating methods have the following limitations:
l The current frame test method works only with an encapsulation format that carries data of a single type. Thus, this test method is not applicable to encapsulation formats (such as the GFP and HDLC encapsulation formats) that carry data of different types.
l The current port loopback function acts on all packets at the port. Thus, the loopback cannot be performed selectively for a specific service.
OAM Auto-Discovery
OAM auto-discovery is the first phase of running the Ethernet port OAM protocol. By exchanging information OAM protocol data units (OAMPDUs) periodically, the equipment at the two ends informs each other of its capability to support the Ethernet port OAM protocol.
OAM auto-discovery is a prerequisite for link performance monitoring, fault locating, and remote loopback.
Fault Locating
When faults (including link faults, dying gasps, and critical events) occur at the local end, the local OAM entity transmits an OAMPDU containing the fault information to the remote end. In this manner, fault notification is realized.
Remote Loopback
By transmitting the loopback control OAMPDU to the remote OAM entity, the local OAM entity requests a loopback from the remote OAM entity.
In loopback mode, all packets except OAMPDU packets are looped back along their original routes. The remote loopback provides valuable assistance during equipment installation and troubleshooting.
Selfloop Test
Ethernet service processing boards enabled with the self-loop detection function can detect a
self-loop. In self-loop mode, a fiber from the transmit port is looped back to the receive port on
the same board.
When a self-loop is detected, an alarm is reported and ports where the self-loop occurs are
blocked.
6.2.3 Availability
The Ethernet port OAM function requires the support of the applicable equipment, boards, and
software.
Version Support

[Table: Applicable Equipment | Applicable Version, for equipment managed by the T2000 and the U2000.]
Hardware Support

Applicable Board: N1PEG16, N1PEX1, N1PETF8, R1PEFS8, Q1PEGS2, R1PEGS1, N1PEG8, N1PEX2, N2PEX1, N1PEFF8, R1PEF4F, TNN1EX2, TNN1EG8, TNN1ETMC, TNN1EFF8

[Table: per-board remarks on the OAM working mode and remote loopback support.]

Maintenance Principles
None.
Working Principle
As shown in Figure 6-2, the Code field of the OAMPDU is 0x00. This indicates that the OAMPDU is an information OAMPDU, which is used for OAM auto-discovery.
Figure 6-2 Packet format of the information OAMPDU

INFORMATION OAMPDU
Octets | Field | Fixed Value
6 | Destination Address | 01-80-c2-00-00-02
6 | Source Address | -
2 | Length/Type | 88-09
1 | Subtype | 0x03
2 | Flags | -
1 | Code | 0x00
42-1496 | Data/Pad | -
4 | FCS | -

LOCAL INFORMATION TLV
Octets | Field | Fixed Value
1 | Information Type | 0x01
1 | Information Length | 0x10
1 | OAM Version | 0x01
2 | Revision | -
1 | State | -
1 | OAM Configuration | -
2 | OAMPDU Configuration | -
3 | OUI | -
4 | Vendor Specific Information | -

REMOTE INFORMATION TLV
Same layout as the local information TLV, with Information Type 0x02.

ORGANIZATION SPECIFIC INFORMATION TLV
Octets | Field | Fixed Value
1 | Information Type | 0xFE
1 | Information Length | -
3 | OUI | -
- | Organization Specific Value | -
The Data field of the information OAMPDU includes a local information type-length-value (TLV) field and a remote information TLV field. Each TLV contains an OAM configuration byte. When the link is normal, successful OAM auto-discovery depends on the bits in the OAM configuration bytes, and these bits also decide which functions can be performed after OAM auto-discovery succeeds.
Table 6-1 lists details of the OAM configuration byte.
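The TLV layout and the OAM configuration byte can be illustrated with a short sketch. This is a minimal Python illustration, not Huawei's implementation; the bit positions assigned to the Table 6-1 names are taken from IEEE 802.3ah and are assumptions here.

```python
import struct

# Bit positions in the OAM Configuration byte. The names follow Table 6-1;
# the exact bit numbers come from IEEE 802.3ah, not from this document.
OAM_MODE_ACTIVE    = 1 << 0  # 1 = active mode, 0 = passive mode
UNIDIRECTIONAL     = 1 << 1  # unidirectional support
LOOPBACK_SUPPORT   = 1 << 2  # remote loopback support
LINK_EVENTS        = 1 << 3  # link event support
VARIABLE_RETRIEVAL = 1 << 4  # variable retrieval support

def build_local_info_tlv(config_byte, oui=b"\x00\x00\x00"):
    """Build the 16-byte local information TLV laid out in Figure 6-2."""
    return struct.pack(
        "!BBBHBBH3s4s",
        0x01,         # Information Type: local information TLV
        0x10,         # Information Length: 16 bytes
        0x01,         # OAM Version
        0x0000,       # Revision
        0x00,         # State
        config_byte,  # OAM Configuration byte
        0x0000,       # OAMPDU Configuration
        oui,          # Organizationally Unique Identifier
        b"\x00" * 4,  # Vendor Specific Information
    )

def parse_oam_configuration(tlv):
    """Decode the OAM Configuration byte (offset 6) of an information TLV."""
    cfg = tlv[6]
    return {
        "mode": "active" if cfg & OAM_MODE_ACTIVE else "passive",
        "unidirectional": bool(cfg & UNIDIRECTIONAL),
        "loopback": bool(cfg & LOOPBACK_SUPPORT),
        "link_events": bool(cfg & LINK_EVENTS),
        "variable_retrieval": bool(cfg & VARIABLE_RETRIEVAL),
    }
```

Two ends that have exchanged such TLVs compare these bits to decide whether auto-discovery can succeed and which functions (loopback, link events, variable retrieval) are usable afterwards.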
Bit | Name | Description
7-5 | Reserved | -
- | Variable retrieval | -
- | Link events | -
- | Unidirectional support | -
- | OAM mode | -
Table 6-2 Mapping relation between OAM working modes and OAM functions
OAM Capability | Active Mode | Passive Mode
Initiating OAM auto-discovery | Yes | No
Responding to OAM auto-discovery | Yes | Yes
Transmitting information OAMPDUs | Yes | Yes
Transmitting event notification OAMPDUs | Yes | Yes
Transmitting variable request OAMPDUs | Yes | No
Transmitting variable response OAMPDUs | Yes | Yes
Transmitting loopback control OAMPDUs | Yes | No
Responding to loopback control OAMPDUs | Yes | Yes
Transmitting organization specific OAMPDUs | Yes | Yes
When the Ethernet port OAM protocol is enabled for a port on the Ethernet service processing board, the information OAMPDU is broadcast periodically. At the same time, the information OAMPDU from the opposite port is received and processed. In this way, the two peer ends exchange OAM information (including OAM configuration information and OAM state information) to establish the OAM connection between them.
Function
The following functions are available at the port only when OAM auto-discovery succeeds: link performance monitoring, fault locating, and loopback.
Working Principle
As shown in Figure 6-3, the Code field of the OAMPDU is 0x01. This indicates that the OAMPDU is an event notification OAMPDU, which the two ends use to notify each other of bit error performance events.
Figure 6-3 Packet format of the event notification OAMPDU

EVENT NOTIFICATION OAMPDU
Octets | Field | Fixed Value
6 | Destination Address | 01-80-c2-00-00-02
6 | Source Address | -
2 | Length/Type | 88-09
1 | Subtype | 0x03
2 | Flags | -
1 | Code | 0x01
42-1496 | Data/Pad | -
4 | FCS | -

SEQUENCE NUMBER
2 | Sequence Number | -

ERRORED SYMBOL EVENT TLV
Octets | Field | Fixed Value
1 | Event Type | 0x01
1 | Event Length | 0x28
2 | Event Time Stamp | -
8 | Errored Symbol Window | -
8 | Errored Symbol Threshold | -
8 | Errored Symbols | -
8 | Error Running Total | -
4 | Event Running Total | -
To collect performance statistics from different aspects, link performance monitoring is classified into error frame event monitoring, error frame second event monitoring, and error frame period event monitoring.
l Trigger condition of error frame events: Within one period of the error frame monitor window, the number of actually received error frames is larger than the configured threshold.
NOTE
The period of the error frame monitor window is the time span of each round of error frame statistics.
l Trigger condition of error frame second events: Within the specified number of seconds, the number of detected error frame seconds is larger than the defined threshold.
NOTE
When the number of error frames within a second is larger than 0, this second is called an error frame second.
l Trigger condition of error frame period events: Among a specific number of received frames, the number of error frames is larger than the defined threshold.
When the Ethernet port OAM protocol is enabled at a port, the port periodically queries the RMON statistics of the hardware chip to acquire information such as the number of correct packets and the number of error packets. By processing this information, the port determines whether any of the preceding three performance events has occurred. When a performance event occurs, the opposite end is informed of the event through an event notification OAMPDU. After receiving the notification, the opposite equipment reports the ETHOAM_RMT_SD alarm so that maintenance personnel can perform troubleshooting.
Function
The link performance monitoring function is used to precisely analyze and monitor the link performance within a specific range.
According to actual requirements, the user can configure the window and threshold values of the three link performance events on the U2000. In this way, it can be detected whether the link performance degrades past the threshold.
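The three trigger conditions reduce to simple window-and-threshold checks. The following Python sketch illustrates the definitions above; the parameter names are illustrative, not the U2000's.

```python
def error_frame_event(frames_errored_in_window, threshold):
    """Error frame event: errored frames in one monitor window exceed the threshold."""
    return frames_errored_in_window > threshold

def error_frame_second_event(errors_per_second, threshold):
    """Error frame second event: a second with more than 0 error frames is an
    error frame second; the event fires when their count exceeds the threshold."""
    error_seconds = sum(1 for n in errors_per_second if n > 0)
    return error_seconds > threshold

def error_frame_period_event(frames_errored, frames_received, period, threshold):
    """Error frame period event: among `period` received frames, the number of
    error frames exceeds the threshold."""
    if frames_received < period:
        return False  # the monitoring period is not complete yet
    return frames_errored > threshold
```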
Working Principle
When faults such as fiber cuts are detected in the receive direction of the port, the hardware module informs the Ethernet port OAM protocol of these faults. The protocol then modifies bits 0-2 of the Flags field of the OAMPDU, constructs OAMPDU packets carrying the fault flags, and transmits them to the opposite equipment.
According to the Flags field of the received OAMPDU, the opposite equipment can check whether faults have occurred at the opposite entity and determine the fault types. Based on the results, the opposite equipment reports the fault alarms. In this way, fault detection is realized.
Function
Enable the Ethernet port OAM protocol at the external physical port on the Ethernet service processing board. When the local end reports the fault alarms, link faults have occurred in the receive direction of the opposite equipment.
Working Principle
As shown in Figure 6-4, the Code field of the OAMPDU is 0x04. This indicates that the OAMPDU is a loopback control OAMPDU, which is used for the loopback test.
Figure 6-4 Packet format of the loopback control OAMPDU
LOOPBACK CONTROL OAMPDU
Octets | Field | Fixed Value
6 | Destination Address | 01-80-c2-00-00-02
6 | Source Address | -
2 | Length/Type | 88-09
1 | Subtype | 0x03
2 | Flags | -
1 | Code | 0x04
1+41 | Data/Pad | -
4 | FCS | -

The first octet of the Data field carries the loopback command:
Octets | Field | Fixed Value
1 | Enable Remote Loopback Command | 0x01
1 | Disable Remote Loopback Command | 0x02
The loopback initiation end first transmits the loopback control OAMPDU packets to the opposite end. After receiving the packets, the opposite end checks whether it can respond to the remote loopback. If it can, the opposite end starts the remote loopback and at the same time transmits a response packet back to the end that initiated the loopback.
After receiving the response packets, the initiation end analyzes them to confirm that the opposite end is in the loopback response state, and then starts the loopback. In this manner, the entire loopback initiation process is complete.
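The handshake can be sketched as below. The state names mirror the Loopback Status values used on the U2000; the class and function names are illustrative, not Huawei's.

```python
# Loopback statuses as displayed on the U2000.
NON_LOOPBACK = "Non-Loopback"
INITIATE_LOCAL = "Initiate Loopback at Local"
RESPOND_REMOTE = "Respond Loopback of Remote"

class OamPort:
    def __init__(self, supports_remote_loopback):
        self.supports_remote_loopback = supports_remote_loopback
        self.loopback_status = NON_LOOPBACK

    def receive_loopback_control(self, command):
        """Handle a loopback control OAMPDU (0x01 enable, 0x02 disable).
        Returns True when a positive response is sent back."""
        if command == 0x01:
            if not self.supports_remote_loopback:
                return False  # cannot respond to the remote loopback
            self.loopback_status = RESPOND_REMOTE
            return True
        if command == 0x02:
            self.loopback_status = NON_LOOPBACK
            return True
        return False

def initiate_loopback(local, remote):
    """The initiator sends the enable command; on a positive response it
    enters the 'Initiate Loopback at Local' state."""
    if remote.receive_loopback_control(0x01):
        local.loopback_status = INITIATE_LOCAL
        return True
    return False
```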
During the entire loopback, the transmission trails of the data packets are marked by arrows in
Figure 6-5.
[Figure 6-5: the local and remote protocol stacks (Client, LLC, OAM, MAC CTRL, MAC, RS, PCS, PMA, PMD, MEDIUM), with the Tx/Rx paths showing how packets travel during the loopback.]
At the loopback initiation end:
l Transmit direction: Transmits all kinds of packets, including OAM protocol packets and data service packets.
l Receive direction: Extracts only the OAM protocol packets and discards the other packets.
At the loopback response end:
l Receive direction: Transmits all packets, excluding the OAM packets, back to the transmit end of this port for loopback.
l Transmit direction: During the loopback, transmits externally only the packets that are received in the receive direction.
NOTE
During the loopback, the OAM protocol packets are transmitted and received normally at both the loopback initiation end and the loopback response end.
Function
The loopback is a method of locating faults and testing performance. By comparing the number of transmitted packets with the number of received packets during the loopback, you can detect the link performance and link faults bidirectionally, from the loopback initiation end to the loopback response end.
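The packet-count comparison amounts to one subtraction; a minimal sketch (the function name is illustrative):

```python
def loopback_verdict(tx_count, rx_count):
    """Compare the RMON count of packets sent into the loopback with the
    count of packets received back; equal counts suggest a healthy link."""
    if tx_count == rx_count:
        return "link-ok"
    return "loss: %d packets" % (tx_count - rx_count)
```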
In self-loop mode, a fiber from the transmit port is looped back to the receive port on the same board. When a self-loop is detected, an alarm is reported and the ports where the self-loop occurs are blocked.
Working Principle
The selfloop test is a function developed by Huawei on the basis of the IEEE 802.3ah protocol. By detecting and blocking the selfloop port, it solves port loop problems.
l The selfloop test packets are constructed as Ethernet port OAM protocol packets, with a packet type of protocol reservation type 6. The first eight significant reserved bits in the flag fields carry the ID of the transmit port.
l When the selfloop test is enabled at a port, the specified selfloop check packets are transmitted from the port, one packet per second.
l When a port receives a selfloop check packet, it compares the source MAC address carried in the packet with its own MAC address. If the two addresses are the same, the local port and the opposite port are located on the same Ethernet service processing board. In this case, the IDs of the two ports are further compared. If the IDs are the same, the two ports are actually the same port, which indicates a port selfloop. If the IDs are different, an intra-board port selfloop is indicated.
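The comparison just described amounts to two checks; a sketch (the field and function names are illustrative, not Huawei's):

```python
def classify_selfloop(pkt_src_mac, pkt_port_id, local_mac, local_port_id):
    """Classify a received selfloop check packet. The packet carries the
    sender's board MAC address and its transmit port ID (in the reserved
    flag bits, per the text above)."""
    if pkt_src_mac != local_mac:
        return "no-selfloop"        # packet came from another board
    if pkt_port_id == local_port_id:
        return "port-selfloop"      # same board, same port
    return "intra-board-selfloop"   # same board, different port
```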
Function
When the selfloop detection function is enabled at all ports on the equipment, loops formed during networking can be detected, and the ETHOAM_SELF_LOOP alarm is reported. When the selfloop function is enabled at the port, the protocol closes the selfloop port automatically after detecting it.
[Figure: two OptiX NE Ethernet service processing boards, each with a VCG and a MAC port, joined by a physical connection over which OAMPDU interworking is available.]
If two sets of directly connected equipment support the Ethernet port OAM protocol, first enable the protocol at the directly connected ports. The protocol then starts auto-discovery. If the auto-discovery succeeds, you can enable the following functions as required.
l After the auto-discovery succeeds, the protocol automatically monitors the performance of the link. If Remote Alarm Support for Link Event is set to Enabled, when the protocol detects that the error frames or error signals cross the threshold, it sends a message about the link performance event to the opposite equipment. After receiving the message, the opposite equipment reports an alarm indicating the link performance event.
l If the protocol detects a fiber cut in the receive direction of the local equipment, it sends a message about the fiber cut to the opposite equipment. After receiving the message, the opposite equipment reports an alarm indicating the remote link fault.
l To detect the link status from the local equipment to the opposite equipment, you can initiate the loopback operation at the local equipment. After the opposite equipment successfully responds to the loopback, the local equipment and the opposite equipment are in the Initiate Loopback at Local status and the Respond Loopback of Remote status respectively. At this time, you can send data services at a certain rate from the local equipment, and then check the RMON statistics of transmitted and received data services. By comparing the count of transmitted services with the count of received services, you can determine the link performance.
l Enable the selfloop detection function for all the external physical ports on the local equipment. Then, if any loop is detected, a selfloop alarm is reported. If the function of blocking the selfloop port is also enabled, the port where the selfloop occurs is blocked.
Background Information
l Different from Ethernet service OAM, Ethernet port OAM is not related to services.
l The interconnected systems must support the Ethernet port OAM protocol.
l Ethernet port OAM requires that the system MAC address of the Ethernet service processing board be unique on the entire network.
l Determine the maintenance operations based on the actual OAM requirements and 6.2.6 Networking and Application.
Configuration Flow
Figure 6-7 shows the flow of configuring Ethernet port OAM.
[Figure 6-7: configuration flow. Check whether the board MAC address is normal; enable the OAM protocol and wait until auto-discovery succeeds at the two ends; fault detection then starts automatically and the opposite end is informed of detected faults (ETHOAM_RMT_CRIT_FAULT and ETHOAM_RMT_SD are reported as applicable); adjust the OAM error frame monitoring threshold if required; configure the remote loopback until the objective of the loopback test is achieved; enable selfloop detection (ETHOAM_SELF_LOOP and ETHOAM_VCG_SELF_LOOP are reported when selfloops are detected).]
Prerequisites
l The system MAC address must be legally obtained to ensure its uniqueness on the entire network.
l To apply the settings, you need to perform a cold reset or warm reset on the board after modifying the MAC address.
Context
Procedure
Step 1 In the NE Explorer, select the required Ethernet board and then choose Configuration >
Advanced Attribute > Set Board MAC Address from the Function Tree.
Step 2 Click Query to query the current MAC address of the board.
l If the MAC address of the board is "00-00-00-00-00-00", the MAC address of the board is not set. Apply for a MAC address and set it for the board. Go to Step 3.
l If the MAC address of the board is not "00-00-00-00-00-00", the MAC address of the board is already set. No further operations are needed.
Step 3 Double-click the parameter field, enter the unique MAC address that you obtain, and confirm
it. Click Apply. A dialog box is displayed, indicating that the operation is successful.
Step 4 Click Close.
Step 5 Right-click on the board, and choose Cold Reset or Warm Reset from the shortcut menu to
apply the settings.
----End
Prerequisites
You must be an NM user with NE operator authority or higher.
Context
l The OAM mode can be active or passive. For two interconnected systems, the OAM mode of at least one system must be the active mode. Otherwise, the OAM auto-discovery fails.
l If the OAM modes of the two systems are both the passive mode, a link fault occurs, or one system fails to receive the OAM protocol messages, an alarm is reported, indicating that the OAM auto-discovery fails.
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Configuration > Packet
Configuration > Ethernet OAM Management > Ethernet Port OAM Management from
the Function Tree.
Step 2 Click the OAM Parameter tab. Select the port, and set OAM Working Mode. In the Enable
OAM Protocol drop-down list, select Enabled.
Step 3 Click Apply.
Step 4 Click the Remote OAM Parameter tab. Click Query.
----End
Prerequisites
Context
When the OAM auto-discovery is successful at the two ends, the link fault detection and performance detection are automatically started.
l To report a detected link fault event to the opposite equipment, Remote Alarm Support for Link Event must be set to Enabled for the local equipment.
l To report a detected link performance event to the opposite equipment, the following operations must be performed for the local equipment:
Set Remote Alarm Support for Link Event to Enabled.
Set Error Frame Period Window (Frame) and Error Frame Monitor Threshold.
After Remote Alarm Support for Link Event is set to Enabled at the opposite port, if the opposite end detects link performance degradation, you can query the ETHOAM_RMT_SD alarm, which is reported at the local end, on the U2000. Based on the alarm, you can determine the type of the link performance event.
After Remote Alarm Support for Link Event is set to Enabled at the opposite port, if the opposite equipment detects a link fault event or encounters a fault from which the equipment cannot recover (such as a power failure), you can query the ETHOAM_RMT_CRIT_FAULT alarm, which is reported at the local end, on the U2000. Based on the alarm, you can determine the fault type.
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Configuration > Packet
Configuration > Ethernet OAM Management > Ethernet Port OAM Management from
the Function Tree.
Step 2 Click the OAM Parameter tab. Select the port, and set Remote Alarm Support for Link
Event to Enabled.
Step 3 Click Apply.
----End
Prerequisites
l The Ethernet port OAM function must be enabled on the remote equipment, and the OAM auto-discovery must be successful at the two ends.
Context
After the OAM auto-discovery is successful, set the error frame period window and error frame monitor threshold parameters, and enable the remote alarm support for link events on the local equipment. If the local equipment detects a link event in the receive direction, it informs the opposite equipment of the link event. If the remote alarm for link events is also supported on the opposite end, the opposite equipment can likewise inform the local equipment of link events detected on its side. The ETHOAM_RMT_SD alarm is then reported at the local end, prompting the maintenance personnel to handle the link event.
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Configuration > Packet
Configuration > Ethernet OAM Management > Ethernet Port OAM Management from
the Function Tree.
Step 2 Click the OAM Error Frame Monitor tab.
Step 3 Select a port and set the error frame monitoring parameters.
For the description of OAM error frame monitoring parameters, see OAM Errored Frame
Monitoring.
Step 4 Click Apply. A dialog box is displayed, indicating that the operation is successful. Click
Close.
----End
Prerequisites
l For the equipment that initiates the loopback, OAM Working Mode must be set to Active.
l The equipment that responds to the loopback must support the remote loopback.
Context
l If the port receives the command of disabling the remote loopback from the opposite OAM port, it exits the "Respond Loopback of Remote" state and clears the loopback response alarm. In the meantime, the equipment that initiated the loopback clears the loopback initiation alarm.
l After the remote loopback function is enabled, service packets, except the OAMPDU packets, are normally looped back at the remote end.
l After using the remote loopback function to complete fault locating and link performance detection, disable the remote loopback function at the end where the loopback was initiated and then restore the services. The alarm is then cleared automatically.
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Configuration > Packet
Configuration > Ethernet OAM Management > Ethernet Port OAM Management from
the Function Tree.
Step 2 Click the OAM Parameter tab. Select the port that needs to initiate a loopback, and select
Enable Remote Loopback from the drop-down list of OAM.
Step 3 Click Apply.
----End
Prerequisites
l All the external physical ports of the Ethernet service processing board must be enabled.
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Configuration > Packet
Configuration > Interface Management > Ethernet Interface from the Function Tree.
Step 2 Click the Advanced Attributes tab.
Step 3 Set Loopback Check to Enabled.
Step 4 Click Apply. Close the displayed dialog box.
----End
Alarm Name | Description
ETHOAM_RMT_LOOP | This alarm indicates the remote loopback of point-to-point Ethernet OAM. It occurs only on a port with the point-to-point OAM protocol enabled. If the port is able to respond to a loopback, it enters the loopback response state and reports the loopback response alarm after it receives the remote loopback enabling command from the opposite OAM port; the loopback initiator reports the loopback initiation alarm. If the port receives the loopback disabling command, it exits the loopback response state and clears the loopback response alarm; the loopback initiator also clears the loopback initiation alarm.
ETHOAM_RMT_SD | -
ETHOAM_RMT_CRIT_FAULT | -
ETHOAM_DISCOVER_FAIL | -
ETHOAM_SELF_LOOP | -
ETHOAM_VCG_SELF_LOOP | -
Field | Value | Description
Port | Enabled, Disabled | Default: Disabled
- | Active, Passive | Default: Active
- | Enabled, Disabled | Default: Enabled
- | Disabled, Enabled | -
Loopback Status | Initiate Loopback at Local, Respond Loopback of Remote, Non-Loopback | Default: Non-Loopback
- | - | Default: Disabled

Field | Value | Description
Port | For example, PORT3 | -
- | Active, Passive | -
- | Enabled, Disabled | Default: Enabled
- | Disabled, Enabled | -
Unidirectional Operation | Enabled, Disabled | Default: Disabled

Field | Value | Description
- | Maxpps/10-Maxpps*60, in a step of 1 | Default: Maxpps
MD
On a network, customers, service providers, and operators focus on different network segments. Thus, the different network segments that a service traverses must be managed separately, as must different service flows. Ethernet service OAM maintains the Ethernet by performing end-to-end checks based on MDs.
MA
An MA is a part of an MD. An MD can be divided into one or multiple MAs. On an operator
network, a VLAN corresponds to a service instance; on equipment, a VLAN corresponds to one
or multiple MAs. Therefore, you can detect the connectivity faults of a network that transmits
a certain service instance by dividing an MD into multiple MAs. An MA is at the same level as
the MD it belongs to.
MP
An MP is the functional entity of Ethernet service OAM. MPs are classified into MEPs and MIPs.
Each MP has an MPID that is unique on the entire network. The information about the MP, keyed by service type, service ID, and VLAN tag, is recorded in the MAC address table, MP table, and routing table. Once a Huawei MP is created successfully, a protocol packet carrying the information about this MP is broadcast to the entire network periodically. The other MPs receive the protocol packet and record the information for future use.
NOTE
All OAM operations must be initiated by an MEP. An MIP cannot initiate any OAM operation or send any OAM packet.
MEP
An MEP specifies the starting position of an MA. It initiates and terminates all OAM packets and is associated with services. On a network, an MA and an MEP ID together specify a unique MEP.
As shown in Figure 6-8, an arrow generally indicates an MEP, and the direction of the arrow specifies the direction of the MEP.
An MEP provides the following functions:
– Allows transmission of all service packets, regardless of their directions.
– Checks all the OAM packets that traverse the MEP along the direction of the arrow:
– Allows transmission of the OAM packets at a level higher than that of the local MEP, without further checks.
– Discards the OAM packets at a level lower than or equal to that of the local MEP.
– Checks all the OAM packets that traverse the MEP against the direction of the arrow:
– Allows transmission of the OAM packets at a level higher than that of the local MEP, without further checks.
– Processes the OAM packets at the same level as the local MEP.
– Discards the OAM packets at a level lower than that of the local MEP.
– Can generate OAM packets at the same level as the local MEP and transmit them in a specified direction.
MIP
As shown in Figure 6-8, an ellipse generally indicates an MIP, which has no direction.
An MIP provides the following functions:
– Allows transmission of all the service packets.
– Checks all the OAM packets that traverse the local MIP:
– Allows transmission of all the OAM packets at a level higher than that of the local MIP.
– Based on the special operation code or destination MAC address, handles the OAM packets at the same level as the local MIP in one of the following ways: transparently transmits them, transparently transmits and processes them, or intercepts and processes them.
– Based on the special operation code, discards all the OAM packets at a level lower than that of the local MIP.
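The MEP and MIP filtering rules above reduce to level comparisons. A minimal sketch (the function names are illustrative):

```python
def mep_handle(pkt_level, mep_level, along_arrow):
    """MEP handling: along the arrow, pass higher levels and discard the rest;
    against the arrow, pass higher, process equal, discard lower."""
    if pkt_level > mep_level:
        return "pass"
    if along_arrow:
        return "discard"  # same or lower level
    return "process" if pkt_level == mep_level else "discard"

def mip_handle(pkt_level, mip_level):
    """MIP handling: pass higher levels, process its own level, discard lower."""
    if pkt_level > mip_level:
        return "pass"
    return "process" if pkt_level == mip_level else "discard"
```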
Layered Management
Ethernet service OAM adds management level fields to OAM protocol packets to provide layered management. A higher-level MD can cross a lower-level MD, but a lower-level MD cannot cross a higher-level MD. Based on such layered management, a service can be maintained differently in different segments, and different service flows can be managed separately.
Figure 6-8 shows the logical diagram of MD layers.
[Figure 6-8: MD layers from the CE outward (Customer ME Level, Service Provider ME Level, Operator ME Level, and Physical ME Level) across bridges with bridge ports; the legend marks maintenance end points, maintenance intermediate points, and the AIS convergence function.]
Currently, the protocol defines eight maintenance levels, from level 0 to level 7. The eight maintenance entity (ME) levels are assigned to differentiate customers, service providers, and operators.
The levels are arranged in descending order as follows: customer ME level > service provider ME level > operator ME level.
The dashed lines in Figure 6-8 show the logical paths that Ethernet service OAM packets travel through. MPs at different layers process OAM protocol packets as follows:
l MPs choose to respond to or terminate OAM protocol packets at the same level, based on the message types of these packets.
ITU-T Y.1731 OAM functions and mechanisms for Ethernet based networks
6.3.3 Availability
The Ethernet service OAM function requires the support of the applicable equipment, boards,
and software.
Version Support

[Table: Applicable Equipment | Applicable Version, for equipment managed by the T2000 and the U2000.]
Hardware Support

Applicable Board: N1PEG16, N1PEX1, N1PETF8, R1PEFS8, Q1PEGS2, R1PEGS1, N1PEG8, N1PEX2, N2PEX1, N1PEFF8, R1PEF4F, TNN1EX2, TNN1EG8, TNN1ETMC, TNN1EFF8
Applicable Object | Remarks
Ethernet service OAM | -
Ethernet service OAM and LAG | -
Applicable Object | Remarks
MD | -
Maintenance association (MA) | An MA needs to be associated with an MD and can be associated with only one MD.
MP | To perform a CC or a loopback (LB) test, set MEPs only at the end points; to perform a link trace (LT) test, select some Ethernet ports that the services pass through as MIPs in addition to setting the MEPs.
OAM operation | -

Maintenance Principles
None.
Working Principle
The source MEP constructs continuity check message (CCM) packets and transmits them periodically. After receiving the CCM packets from the source MEP, the sink MEP enables the CC function for this source MEP.
If the sink MEP fails to receive the CCM packets from the source MEP within a check period, it reports the alarm automatically until the link recovers. That is, the sink MEP does not clear the alarm until it receives the CCM packets from the source MEP again.
As shown in Figure 6-9, the CC function of MEP1 is enabled. MEP1 transmits CCM packets externally. After receiving the first CCM packet, MEP2, MEP3, and MEP4 in the same maintenance domain each start a timer and then receive the CCM packets from MEP1 periodically. Once the link is faulty, the sink MEP fails to receive the CCM packets within a check period. In this case, the sink MEP reports the ETH_CFM_LOC alarm. The alarm is not cleared until the link recovers.
Figure 6-9 Continuity check diagram
When the CC function is enabled at MEP1, MEP2, MEP3 and MEP4 at the same time, each
MEP is both the source end and sink end for the CC function. In this way, the bidirectional
continuity test is realized.
NOTE
Only the MEP can enable the continuity check and act as the receiving and responding end of the check.
Function
Because the CC is based on a regular check mechanism, the check runs automatically once it
is configured successfully. In this way, link faults are checked and discovered automatically.
In addition, the broadcast protocol packets generated by the CC enable point-to-multipoint and
multipoint-to-multipoint detection, especially on a Layer 2 switching network. In this way, the
network detection of the entire maintenance domain is realized.
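The CC behavior described above can be sketched as a small state machine. This is an illustrative model only; the class name, method names, and the 1-second check period are assumptions, not the equipment software:

```python
CHECK_PERIOD = 1.0  # assumed CCM check period in seconds

class SinkMEP:
    """Sink-side CC state: reports an ETH_CFM_LOC-style alarm when CCMs stop."""

    def __init__(self):
        self.last_ccm = None   # arrival time of the most recent CCM
        self.loc_alarm = False

    def on_ccm(self, now):
        # Receiving a CCM restarts the timer and clears any standing alarm.
        self.last_ccm = now
        self.loc_alarm = False

    def tick(self, now):
        # No CCM within a check period -> loss of continuity is reported.
        if self.last_ccm is not None and now - self.last_ccm > CHECK_PERIOD:
            self.loc_alarm = True
        return self.loc_alarm

sink = SinkMEP()
sink.on_ccm(0.0)
assert sink.tick(0.5) is False   # CCM arrived recently: link considered up
assert sink.tick(2.0) is True    # check period exceeded: alarm reported
sink.on_ccm(2.1)                 # link recovered: next CCM clears the alarm
assert sink.tick(2.2) is False
```

Note that, as in the document, the alarm is cleared only when CCM packets arrive again, not by the passage of time.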
Working Principle
Based on a bidirectional service, the loopback is a one-time test that is started manually. The
source MEP constructs the loopback message (LBM) packets and adds the destination MP (MIP
or MEP) ID to the packets. When the packets are transmitted, a timer is started.
After receiving the LBM packets, the sink MP constructs the loopback return (LBR) packets
and transmits them back to the source MEP. In this case, the loopback succeeds. If the timer of
the source MEP expires before the LBR packets are received from the sink MP, the loopback fails.
As shown in Figure 6-10, MEP1 transmits LBM packets to the sink MEP4. When receiving the
packets, MIP2 and MIP3 in the same maintenance domain transparently transmit the packets.
After receiving the packets, the sink MEP4 transmits LBR packets to the source MEP1. Then
the loopback test is complete.
Figure 6-10 Loopback test diagram
NOTE
Only the MEP can initiate the loopback test, and both the MEP and MIP can be the receive end in the test.
To prevent LB packets from being transmitted continuously between different NEs, the OAM protocol
does not support coexistence of two management units on an NE. That is, different boards on the same NE
should not have the same loopback local identification (LLID).
Function
The loopback can be used to test the state of the link between the source MEP and any node in
the maintenance domain. Because the sink MP can be a MIP, the loopback can be used to locate
faults. Unlike the continuity test, the loopback test is not continuous; you need to start the test
manually each time.
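The LBM/LBR exchange can be illustrated with a short sketch. The path model and names are hypothetical, not taken from the equipment software:

```python
def loopback_test(path, target_id, reachable):
    """One-shot LB test: the source MEP sends an LBM toward target_id.

    path      -- ordered MP IDs from source to destination (MIPs and MEPs)
    reachable -- set of MP IDs the LBM can still reach (models link faults)
    Returns True if an LBR comes back, i.e. the loopback succeeds.
    """
    for mp in path:
        if mp not in reachable:
            return False          # LBM lost: source timer expires, test fails
        if mp == target_id:
            return True           # sink MP answers with an LBR
        # any other MP on the path transparently forwards the LBM
    return False                  # target not on this path

path = ["MIP2", "MIP3", "MEP4"]
assert loopback_test(path, "MEP4", {"MIP2", "MIP3", "MEP4"}) is True
assert loopback_test(path, "MEP4", {"MIP2"}) is False   # fault after MIP2
assert loopback_test(path, "MIP2", {"MIP2"}) is True    # a MIP can be the sink
```

The last case mirrors the document's point that the sink MP may be a MIP, which is what makes LB usable for fault localization.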
Working Principle
The source MEP constructs the link trace message (LTM) packets and adds the ID of the sink
MEP to the packets for transmission. At the same time, the timer is started.
Each MIP on the link that belongs to this maintenance domain forwards the packets toward the
sink MEP and, at the same time, transmits a link trace reply (LTR) packet back to the source MEP.
After the sink MEP receives the LTM packets, the packet forwarding is complete. The sink MEP
then transmits LTR packets back to the source MEP. In this case, the link trace test is successful.
If the timer of the source MEP expires before the LTR packets are received from the sink MEP,
the link trace test fails.
In addition, a hop parameter is added to each packet that is transmitted back. The hop indicates
the position on the link of the MP that returns the packet during the link trace test. If the first
MP that the LTM passes through is on the same board as the source MP, the hop starts from 0
and increments along the path. Otherwise, the hop starts from 1.
The function of the link trace test is similar to that of the loopback test. The difference lies in
which maintenance points respond: in the loopback test, only the sink MP responds to the LBM
frames, whereas in the link trace test, all MPs on the link respond to the LTM frames. According
to these response messages, the MIPs that are traversed from the source MEP to the destination
MEP can be identified, as shown in Figure 6-11.
Figure 6-11 Link trace test diagram
1. The source MEP1 transmits the LTM packets to the destination MEP4.
2. After receiving the LTM packets, MIP2 transmits the LTR packets to the source MEP1 and forwards the LTM packets at the same time.
3. After receiving the LTM packets, MIP3 transmits the LTR packets to the source MEP1 and forwards the LTM packets at the same time.
4. After receiving the LTM packets, the destination MEP4 terminates the LTM packets and transmits the LTR packets to the source MEP1.
NOTE
Only the MEP can initiate the link trace test and be the termination of the test.
Function
Compared with the LB test, the LT test enhances the capability of fault locating.
1. According to the returned MIPs, the route that the protocol packets traverse can be determined.
2. According to the returned LTR packets, the state of each link segment can be checked: segments that transmit the packets back are normal. In this manner, the network segment where the fault occurs is located.
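As a rough illustration of how the returned LTRs and their hop values localize a fault, consider this sketch; the path model and names are assumptions for illustration only:

```python
def link_trace(path, target_id, reachable, first_hop_same_board=False):
    """Sketch of an LT test: every MP the LTM reaches returns an LTR.

    Returns the list of (hop, mp_id) pairs received by the source MEP;
    the faulty segment starts after the last MP that answered.
    """
    hop = 0 if first_hop_same_board else 1
    replies = []
    for mp in path:
        if mp not in reachable:
            break                     # LTM lost: no further LTRs come back
        replies.append((hop, mp))
        if mp == target_id:
            break                     # sink MEP terminates the LTM
        hop += 1
    return replies

path = ["MIP2", "MIP3", "MEP4"]
# Healthy link: LTRs from MIP2, MIP3 and the sink MEP4.
assert link_trace(path, "MEP4", {"MIP2", "MIP3", "MEP4"}) == [
    (1, "MIP2"), (2, "MIP3"), (3, "MEP4")]
# Fault between MIP2 and MIP3: the last LTR pinpoints the faulty segment.
assert link_trace(path, "MEP4", {"MIP2"}) == [(1, "MIP2")]
```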
Single-ended LM
Single-ended LM is used to count lost packets on an Ethernet link within a specified period of
time.
NOTE
LM can be performed in two ways: dual-ended LM and single-ended LM. Currently, the OptiX OSN
equipment supports single-ended LM only. To learn about dual-ended LM, see ITU-T Y.1731, OAM
functions and mechanisms for Ethernet based networks.
Single-ended LM is used for on-demand OAM. That is, a single-ended LM test is manually
triggered. In this mode, a local MEP, within a specified period of time, periodically sends packets
with ETH-LM request (ETH-LMM) information to its opposite MEP, and receives packets with
ETH-LM reply (ETH-LMR) information from its opposite MEP.
NOTE
A maintenance intermediate point (MIP) transparently transmits packets with ETH-LMM and ETH-LMR
information, without the need to support LM.
The following considers MEP (PE1) as an example to illustrate the single-ended LM process. The same
process applies to MEP (PE2).
1. A local MEP periodically sends an ETH-LMM frame to its opposite MEP. An ETH-LMM frame contains the following value:
l TxFCf: value of the local counter TxFCl at the time of ETH-LMM frame transmission
2.
When receiving a valid ETH-LMM frame, the opposite MEP transmits an ETH-LMR
frame. An ETH-LMR frame contains the following values:
l TxFCf: value of TxFCf copied from the ETH-LMM frame
l RxFCf: value of local counter RxFCl at the time of ETH-LMM frame reception
l TxFCb: value of local counter TxFCl at the time of ETH-LMR frame transmission
3. Upon receiving an ETH-LMR frame, a local MEP uses the following values to make near-end and far-end loss measurements:
l Frame loss (far-end) = |TxFCf[tc] - TxFCf[tp]| - |RxFCf[tc] - RxFCf[tp]|
l Frame loss (near-end) = |TxFCb[tc] - TxFCb[tp]| - |RxFCl[tc] - RxFCl[tp]|
NOTE
l TxFCf[tc], RxFCf[tc], and TxFCb[tc] represent the received ETH-LMR frame's TxFCf, RxFCf,
and TxFCb respectively. RxFCl[tc] represents the local counter RxFCl value at the time this
ETH-LMR frame was received, where tc is the reception time of the current ETH-LMR frame.
l TxFCf[tp], RxFCf[tp], and TxFCb[tp] represent the previous ETH-LMR frame's TxFCf, RxFCf,
and TxFCb respectively. RxFCl[tp] represents the local counter RxFCl value at the time the
previous ETH-LMR frame was received, where tp is the reception time of the previous ETH-LMR frame.
FLR
FLR is a measure of the packet loss ratio between two MEPs that belong to the same CoS instance
on a point-to-point ETH connection. During the LM, a local MEP counts lost packets, and records
the total number of transmitted packets.
FLR is calculated as follows.
FLR = Frame loss/Total number of transmitted packets
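The loss formulas and the FLR calculation above can be checked with a small worked example. The counter values are invented for illustration, and counter wrap-around is ignored:

```python
def single_ended_lm(cur, prev):
    """Far-end/near-end frame loss from two consecutive ETH-LMR samples.

    cur/prev are dicts with TxFCf, RxFCf, TxFCb copied from the LMR frame,
    plus RxFCl, the local receive counter sampled at LMR reception time.
    """
    far_end = abs(cur["TxFCf"] - prev["TxFCf"]) - abs(cur["RxFCf"] - prev["RxFCf"])
    near_end = abs(cur["TxFCb"] - prev["TxFCb"]) - abs(cur["RxFCl"] - prev["RxFCl"])
    return far_end, near_end

def flr(frame_loss, frames_sent):
    # FLR = frame loss / total number of transmitted packets
    return frame_loss / frames_sent

prev = {"TxFCf": 100, "RxFCf": 100, "TxFCb": 100, "RxFCl": 100}
cur  = {"TxFCf": 200, "RxFCf": 195, "TxFCb": 200, "RxFCl": 198}
far_end, near_end = single_ended_lm(cur, prev)
assert (far_end, near_end) == (5, 2)   # 5 frames lost toward the far end, 2 toward us
assert flr(far_end, 100) == 0.05       # 5% loss ratio over 100 transmitted frames
```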
Two-Way DM
Two-way DM can be used to measure frame delay and frame delay variation of bidirectional
service packets on an Ethernet link within a specified period of time.
NOTE
DM can be performed in two ways: two-way DM and one-way DM. Currently, the OptiX OSN equipment
supports two-way DM only. To learn about one-way DM, see ITU-T Y.1731, OAM functions and
mechanisms for Ethernet based networks.
Two-way DM is used for on-demand OAM. That is, a two-way DM test is manually triggered.
In this mode, a local MEP, within a specified period of time, periodically sends packets with
ETH-DM request (ETH-DMM) information to its opposite MEP, and receives packets with
ETH-DM reply (ETH-DMR) information from its opposite MEP.
NOTE
An MIP transparently transmits packets with ETH-DMM and ETH-DMR information, without the need
to support DM.
The following considers MEP (PE1) as an example to illustrate the two-way DM process. The same process
applies to MEP (PE2).
1. A local MEP periodically sends an ETH-DMM frame to its opposite MEP. An ETH-DMM frame contains the following value:
l TxTimeStampf: time of ETH-DMM frame transmission
2. When receiving a valid ETH-DMM frame, the opposite MEP transmits an ETH-DMR frame. An ETH-DMR frame contains the following values:
l TxTimeStampf: value of TxTimeStampf copied from the ETH-DMM frame
l RxTimeStampf: time of ETH-DMM frame reception
l TxTimeStampb: time of ETH-DMR frame transmission
3. Upon receiving an ETH-DMR frame, a local MEP uses the following values to make frame delay measurements:
l Frame delay = RxTimeb - TxTimeStampf (RxTimeb represents the reception time of the ETH-DMR frame.)
This value contains the time the opposite node takes to handle the DM packet, and serves as input for frame delay variation measurement.
l Frame delay = (RxTimeb - TxTimeStampf) - (TxTimeStampb - RxTimeStampf)
This value does not contain the time the opposite node takes to handle the DM packet, and is
therefore more accurate.
FDV
FDV is a measure of the delay variations of service packets between two MEPs that belong to
the same CoS instance on a point-to-point ETH connection. During the DM, a local MEP
measures frame delays, and records the maximum frame delay and minimum frame delay.
FDV is calculated as follows.
FDV = Frame delay (max) - Frame delay (min)
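A worked example of the two frame delay formulas and the FDV calculation follows; the timestamp values are invented and the units are arbitrary:

```python
def two_way_delay(tx_f, rx_f, tx_b, rx_b):
    """Two-way frame delay from one ETH-DMM/ETH-DMR exchange.

    tx_f -- TxTimeStampf, DMM transmission time at the local MEP
    rx_f -- RxTimeStampf, DMM reception time at the opposite MEP
    tx_b -- TxTimeStampb, DMR transmission time at the opposite MEP
    rx_b -- DMR reception time back at the local MEP (RxTimeb)
    The second form removes the opposite node's processing time.
    """
    coarse = rx_b - tx_f
    accurate = (rx_b - tx_f) - (tx_b - rx_f)
    return coarse, accurate

def fdv(delays):
    # Frame delay variation over a measurement window.
    return max(delays) - min(delays)

coarse, accurate = two_way_delay(tx_f=0.0, rx_f=1.0, tx_b=1.5, rx_b=2.5)
assert coarse == 2.5       # includes 0.5 of processing time at the opposite node
assert accurate == 2.0     # round-trip delay excluding processing time
assert fdv([2.0, 2.5, 2.25]) == 0.5
```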
(Figure: CE1-CE4 connect through operator networks A and B; packets carry C_VLAN tags on the access side and S_VLAN_A1/S_VLAN_A2 or S_VLAN_B1/S_VLAN_B2 tags inside the operator networks.)
CE1-CE4 refer to the equipment on the access side. The packets that are transmitted on the
operator network contain C_VLAN tags.
1. When these packets are transmitted to the operator network (PE equipment) in the uplink direction, the S_VLAN tags defined by the operator are added to the packets for service grooming.
2. When service packets are transmitted from one operator network to the other operator network, the S_VLAN tags are replaced by the S_VLAN tags defined by the latter operator.
3. When service packets are transmitted from the operator network to the client-side equipment in the downlink direction, the S_VLAN tags are stripped. At this point, the end-to-end transmission is complete.
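The three tag operations above can be sketched by modeling the VLAN tag stack as a list. This is an illustrative model only; real equipment operates on tag fields in the frame header:

```python
def uplink_push(tags, s_vlan):
    # PE adds the operator's S_VLAN tag on top of the customer C_VLAN tag.
    return [s_vlan] + tags

def inter_operator_swap(tags, new_s_vlan):
    # The outermost S_VLAN tag is replaced by the next operator's tag.
    return [new_s_vlan] + tags[1:]

def downlink_pop(tags):
    # The S_VLAN tag is stripped before handing the packet back to the CE.
    return tags[1:]

tags = ["C_VLAN"]                                  # packet as sent by the CE
tags = uplink_push(tags, "S_VLAN_A1")              # entering operator network A
assert tags == ["S_VLAN_A1", "C_VLAN"]
tags = inter_operator_swap(tags, "S_VLAN_B1")      # crossing into operator network B
assert tags == ["S_VLAN_B1", "C_VLAN"]
tags = downlink_pop(tags)                          # leaving the network toward the CE
assert tags == ["C_VLAN"]
```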
The service transmission involves the equipment CE1-CE4 on the access side, operator network
A and operator network B. These entities have different requirements for the ETH-OAM.
l Each operator needs to focus on OAM in its own network segment only and does not need to monitor the network segments of the other operator.
l The user on the access side cares about whether the service can be transmitted to the destination. In this case, the state of the entire link needs to be detected.
As shown in Figure 6-15, multiple maintenance domains that are based on different VLAN
values are established. In this way, requirements of operators and users on the access side can
be met.
Figure 6-15 Maintenance domains based on different VLAN values
(Maintenance domain 1, between CE1/CE2 and CE3/CE4, uses levels 5-7; maintenance domains 2-5, inside operator networks A and B, use levels 0-4.)
User on the access side: Maintenance domain 1, based on the C_VLAN value, is created
between the access-side CEs at the two ends of the transmission network. Because
maintenance domain 1 covers maintenance domains 2-5 of the network operators, its
maintenance point level must be higher than the levels of maintenance domains 2-5. In this
manner, the OAM packets of maintenance domain 1 are not terminated by the MEPs of
maintenance domains 2-5.
Ethernet service processing boards of the OptiX NG-SDH series equipment provide the ETH-OAM
function, and thus maintenance domains 4 and 5 are based on the S_VLAN tags. For services
that carry multiple VLAN tags, only the outermost tag is identified. Thus, the outermost
tag is considered the VLAN tag of a service.
The layered management principle defines that an upper-level maintenance domain can cross
a lower-level maintenance domain, but a lower-level maintenance domain cannot cross an
upper-level maintenance domain. Therefore, the level of maintenance domain 1 must be higher
than the levels of the other maintenance domains. Maintenance domains 2-5 can be set to the
same level because they do not overlap. For example, maintenance domains 2 and 4 are at the
same level, and maintenance domains 3 and 5 are at the same level. In addition, maintenance
domains 2-5 carry different S_VLAN tags, and thus they do not affect each other.
6.3.7.1 Creating an MD
An MD defines the range and level of the Ethernet OAM. MDs of different ranges and levels
can provide users with differentiated OAM services.
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > Ethernet
OAM Management > Ethernet Service OAM Management from the Function Tree.
Step 2 Click New, and select New Maintenance Domain.
Step 3 When the New Maintenance Domain dialog box is displayed, set Maintenance Domain
Name and Maintenance Domain Level.
Step 4 Click Apply. The Operation Result dialog box is displayed, indicating that the operation is
successful.
Step 5 Click Close.
----End
6.3.7.2 Creating an MA
An MD can be divided into several independent MAs. By creating MAs, you can associate
specific Ethernet services with MAs. This facilitates Ethernet OAM operations.
Prerequisites
l
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > Ethernet
OAM Management > Ethernet Service OAM Management from the Function Tree.
Step 2 Click New, and select New Maintenance Association.
Step 3 When the New Maintenance Association dialog box is displayed, set the required parameters.
Step 4 Click Apply. The Operation Result dialog box is displayed, indicating that the operation is
successful.
Step 5 Click Close.
----End
Prerequisites
l
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > Ethernet
OAM Management > Ethernet Service OAM Management from the Function Tree.
Step 2 Select the created MA, click New, and select New MEP Maintenance Point.
Step 3 When the New MEP Maintenance Point dialog box is displayed, set the required parameters.
Step 4 Click Apply. The Operation Result dialog box is displayed, indicating that the operation is successful.
Step 5 Click Close.
----End
Prerequisites
l
An MA must be created.
Procedure
Step 1 In the NE Explorer, select an NE. Then, choose Configuration > Packet Configuration >
Ethernet OAM Management > Ethernet Service OAM Management in the Function Tree.
Step 2 Click OAM and select Manage Remote MEP Point. The Manage Remote Maintenance
Point dialog box is displayed.
Step 3 Set the relevant parameters.
Step 4 Click Apply. Then, the Operation Result dialog box is displayed, indicating that the operation
is successful.
Step 5 Click Close.
----End
Prerequisites
l
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > Ethernet
OAM Management > Ethernet Service OAM Management from the Function Tree.
Step 2 Select Maintenance Domain, and select Maintenance Association. Select MEP Point.
Step 3 Click OAM, and select Activate CC from the drop-down list. The Operation Result dialog
box is displayed, indicating that the operation is successful.
Step 4 Click Close.
----End
Prerequisites
l
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > Ethernet
OAM Management > Ethernet Service OAM Management from the Function Tree.
Step 2 Select Maintenance Domain, and select Maintenance Association. Select MEP Point.
Step 3 Click OAM, and choose Start LB from the drop-down list. The LB Test dialog box is displayed.
Step 4 Set the required parameters. For the parameters to be set, see 6.3.9.2 Starting LB.
NOTE
l To identify the destination MP according to the MP ID, select Destination Maintenance Point ID.
l To identify the destination MP according to the MAC address, select Destination Maintenance Point
MAC Address. After selecting Destination Maintenance Point MAC Address, manually copy the
MAC address of the port at which the sink MP is configured to the value field of Destination
Maintenance Point MAC Address. See Setting the Advanced Attributes of Ethernet Ports for the
navigation path of the MAC address of the port at which the sink MEP is configured.
Step 5 Click Start Test. The Operation Result dialog box is displayed, indicating that the operation
is successful.
Step 6 Click Close. Check the test result.
----End
Prerequisites
l
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > Ethernet
OAM Management > Ethernet Service OAM Management from the Function Tree.
Step 2 Select Maintenance Domain, and select Maintenance Association. Select MEP Point.
Step 3 Click OAM, and choose Start LT from the drop-down list. The LT Test dialog box is displayed.
Step 4 Set the required parameters. For the parameters to be set, see 6.3.9.3 Starting LT.
NOTE
l To identify the destination MP according to the MP ID, select Destination Maintenance Point ID.
l To identify the destination MP according to the MAC address, select Destination Maintenance Point
MAC Address. After selecting Destination Maintenance Point MAC Address, manually copy the
MAC address of the port at which the sink MP is configured to the value field of Destination
Maintenance Point MAC Address. See Setting the Advanced Attributes of Ethernet Ports for the
navigation path of the MAC address of the port at which the sink MEP is configured.
Step 5 Click Start Test. The Operation Result dialog box is displayed, indicating that the operation
is successful.
Step 6 Click Close. Check the test result.
----End
Alarm Name
l EX_ETHOAM_CC_LOS
l EX_ETHOAM_MPID_CNFLCT
l ETH_CFM_LOC
l ETH_CFM_MISMERGE
l ETH_CFM_RDI
l ETH_CFM_UNEXPERI
l ETH_EFM_DF
l ETH_EFM_EVENT
l ETH_EFM_LOOPBACK
l ETH_EFM_REMFAULT

Performance Event Name
l ETH_CFM_CSLS
l ETH_CFM_LS
l ETH_CFM_SLS
l ETH_CFM_UAS
Field / Value / Description
l 8 bytes
l 0-7; Default: 4
l 8 bytes
l Relevant Service
l Node
l VLAN: for example, 5
l MP ID: for example, 6
l Direction: Ingress, Egress
l CC Status: Active, Inactive; Default: Active
l Board
l Port
l Service ID: for example, 10
l Service Name
l Service Type
l Activation Status: Active, Inactive; Default: Active
6.3.9.2 Starting LB
Perform loopback for the Ethernet service without interrupting the service to check the
connectivity of the service and thus to locate and rectify the fault.
Table 6-10 lists the parameters for starting LB.
Table 6-10 Parameters for starting LB (Field / Value / Description)
l 8 bytes
l Maintenance Association Name: 8 bytes
l For example, 5
l Destination Maintenance Point MAC Address: indicates the MAC address of the port where the destination maintenance point (MP) is located. See A.6.10 Destination Maintenance Point MAC Address (Ethernet Service OAM Management) for more information.
l 1-255; Default: 3
l 64-1400; Default: 64
l 0-7; Default: 7
l Test Result: Character string
6.3.9.3 Starting LT
Perform a link trace test for the Ethernet service without interrupting the service to check the
connectivity of the service and thus to locate and rectify the fault.
Table 6-11 lists the parameters for starting LT.
Table 6-11 Parameters for starting LT (Field / Value / Description)
l 8 bytes
l Maintenance Association Name: 8 bytes
l For example, 5
l Destination Maintenance Point MAC Address: indicates the MAC address of the port where the destination maintenance point (MP) is located. See A.6.10 Destination Maintenance Point MAC Address (Ethernet Service OAM Management) for more information.
l Destination Maintenance Point ID/MAC: for example, 1A-2B-3C-4D-5E-6F
l Hop Count: 1-64, /; Default: /
l Character string
7 MPLS OAM
MPLS OAM
l Tunnel OAM
The tunnel OAM mechanism helps to effectively detect, identify, and locate internal defects at the tunnel layer of an MPLS network. The equipment triggers protection switching based on the OAM detection status. Therefore, quick fault detection and service protection can be achieved.
l PW OAM
The PW OAM mechanism helps to effectively detect, identify, and locate internal defects at the PW layer of a network. The equipment triggers protection switching based on the OAM detection status. Therefore, quick fault detection and service protection can be achieved.
sends the response packets to the local equipment. The local equipment can determine that
the PW connectivity is normal only after receiving the response packets.
l CV packet
CV packets are used to check the connectivity of LSPs. The ingress node sends CV packets at
an interval of 1s, and the egress node checks the number and correctness of the CV packets
received within any 3s period.
Figure 7-1 shows the format of a CV packet.
Figure 7-1 CV packet format (fields: Label = 14 (OAM alert label), EXP, S, TTL = 1; Function type = 0x01; BIP 16 (2 octets))
l Trail termination source identifier (TTSI): The TTSI consists of the LSR ID and LSP ID of the ingress node. It is used to uniquely identify an LSP on a network.
l 16-bit interleaved parity (BIP 16): If a CV packet contains an incorrect BIP 16, the receiver discards the packet. When CV packets are continuously discarded due to incorrect BIP 16 values, the equipment notifies the NMS.
l Reserved: The reserved field is reserved for future use and is set to all 0s.
l Padding: This field contains padding bytes and is set to all 0s.
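A minimal sketch of a 16-bit interleaved parity check, computed here as the XOR of the data taken as 16-bit words. The exact coverage of the BIP 16 field in a real CV/FFD packet is defined by ITU-T Y.1711, so treat this as an assumption-laden illustration:

```python
def bip16(data: bytes) -> int:
    """XOR of the input interpreted as big-endian 16-bit words (illustrative)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input (assumption)
    parity = 0
    for i in range(0, len(data), 2):
        parity ^= (data[i] << 8) | data[i + 1]
    return parity

assert bip16(bytes([0x12, 0x34, 0x12, 0x34])) == 0       # identical words cancel
assert bip16(bytes([0x12, 0x34])) == 0x1234
# A receiver recomputes the parity over the received data and discards the
# packet when the result does not match the BIP 16 field.
```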
l
FFD packet
FFD packets are also used to check connectivity of LSPs. The ingress node sends FFD
packets at an interval of 3.3 ms to 500 ms, and the egress node checks the number of received
FFD packets and correctness of the FFD packets in any three packet transmission periods.
The transmission interval can be 3.3 ms, 10 ms, 20 ms, 50 ms, 100 ms, 200 ms, or 500 ms.
Figure 7-2 shows the format of an FFD packet.
Figure 7-2 FFD packet format (fields: Label = 14 (OAM alert label), EXP, S, TTL = 1; Function type = 0x07; Frequency (1 octet); Padding (all 0x00, 17 octets); BIP 16 (2 octets))
Table 7-1 provides the differences between an FFD packet and a CV packet.
Table 7-1 Format differences between an FFD packet and a CV packet
l Function type: 0x01 in a CV packet; 0x07 in an FFD packet
l Frequency: not present in a CV packet; carried in a 1-octet field in an FFD packet
l FDI packet
Generated by LSRs that detect defects, an FDI packet is used to respond to defects detected by transmitting CV or FFD packets. FDI packets help to prevent the affected LSP from generating an alarm.
- An LSR generates an FDI packet to notify the egress node of the fault.
- After the OAM function is disabled for the ingress node, an FDI packet is generated to notify the egress node. After the OAM adaptive mode is enabled for the egress node, the egress node responds to the FDI packet and stops OAM detection.
l BDI packet
After detecting a defect, the egress node transmits a BDI packet that carries the defect information to the upstream ingress node along a reverse tunnel.
7.2.3 Ping
The ping function helps to check connectivity of links, tunnels, and virtual circuits. With a similar
implementation method, the tunnel ping and PW ping functions are applicable to different layers
and detected objects. The tunnel ping function is commonly used at the MPLS layer, and the
PW ping function is commonly used at the PW layer.
Tunnel Ping
The tunnel ping function uses MPLS echo request and MPLS echo reply packets to check the
availability of a tunnel. An MPLS echo request packet carries the forwarding equivalence class
(FEC) information to be checked and is transmitted along the tunnel with other packets that
belong to the same FEC. In this way, the tunnel is checked.
When the tunnel ping function is used, an MPLS echo request packet is transmitted to the egress
node of the tunnel and then the control plane of the egress node checks whether the local node
is the egress of the FEC. The tunnel ping is used to check whether a tunnel is successfully created.
PW Ping
With a similar implementation method to the tunnel ping function, the PW ping function helps
you manually detect the connection status of a virtual circuit. If a PW fails to forward data, the
control plane in charge of creating the PW cannot detect this failure. In such scenarios, the PW
ping function works.
The PW ping function uses MPLS echo request and MPLS echo reply packets to check
availability of a PW. An MPLS echo request packet carries the FEC information to be checked
and is transmitted with other packets that also belong to this FEC along the PW. In this way,
this PW is checked.
7.2.4 Traceroute
Similar to the ping function, the MPLS traceroute function helps to check the connectivity of
MPLS tunnels. In addition, the MPLS traceroute function can locate a faulty node accurately.
PW Traceroute
The PW traceroute function uses the MPLS echo request and MPLS echo reply packets to check
availability of a PW. An MPLS echo request packet carries the FEC information to be checked
and is transmitted with other packets that also belong to this FEC along the PW. In this way,
this PW is checked.
7.4 Availability
The MPLS OAM function requires the support of the applicable equipment, boards, and
software.
Version Support
NMS: T2000, U2000

Hardware Support
Applicable boards: N1PEG16, N1PEX1, N1PETF8, R1PEFS8, Q1PEGS2, R1PEGS1, N1PEG8, N2PEX1, N1PEX2, N1PEFF8, R1PEF4F, TNN1EX2, TNN1EG8, TNN1ETMC, TNN1EFF8
Applicable Object
l Tunnel OAM and PW OAM
l Tunnel OAM
l PW OAM
Maintenance Principles
None.
7.6 Principles
This topic describes the principles of MPLS OAM.
(Figure: the ingress LSR sends CV/FFD packets along the LSP through the transit LSR to the egress LSR; BDI packets are returned along the reverse tunnel.)
4. The egress LSR compares the received information fields, such as packet types and frequencies, with the fields recorded locally to check the correctness of each packet. The egress LSR also counts the numbers of correct and incorrect packets during the detection period. In this way, the connectivity of the tunnel is monitored in real time.
5. When detecting a defect on the tunnel, the egress LSR analyzes the defect type and transmits a BDI packet that carries the defect information to the ingress LSR through the reverse tunnel. The ingress node can then learn about the defect status in a timely manner.
NOTE
If a protection group is correctly configured, the corresponding protection switching is also triggered.
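The egress-side counting can be sketched as a window check. The thresholds below are illustrative assumptions derived from the intervals given earlier (CV: 1 s packets checked over any 3 s; FFD: three transmission periods):

```python
def connectivity_ok(arrivals, period=1.0, window=3.0):
    """Did the expected number of correct CV/FFD packets arrive in the window?

    arrivals -- times of correct packets received inside the sliding window
    period   -- configured transmission interval (CV: 1 s; FFD: 3.3-500 ms)
    window   -- detection window (CV: 3 s; FFD: three transmission periods)
    """
    expected = round(window / period)  # e.g. 3 CV packets in any 3 s
    return len(arrivals) >= expected

assert connectivity_ok([0.1, 1.1, 2.1]) is True   # tunnel considered normal
assert connectivity_ok([0.1]) is False            # defect: BDI/switching may follow
assert connectivity_ok([0.0, 0.05], period=0.05, window=0.15) is False  # FFD at 50 ms
```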
2. If the transit LSR fails to forward the packet, it sends a reply message containing an error code.
3. The egress LSR processes the received MPLS echo request packet and returns an MPLS echo reply packet.
1. The ingress LSR transmits an MPLS echo request packet with its TTL set to 1.
2. The transit LSR processes the received MPLS echo request packet and returns an MPLS echo reply packet.
NOTE
In this example, the transit LSR functions to terminate the MPLS echo request packet because TTL is
equal to 1. That is, the transit LSR does not transparently transmit the received packet.
3. Upon receiving the MPLS echo reply packet, the ingress LSR changes the TTL value to 2 and transmits another MPLS echo request packet.
4. The transit LSR functions as a transit node for the MPLS echo request packet because TTL is equal to 2. That is, the transit LSR transparently transmits the received packet.
5. The egress LSR processes the received MPLS echo request packet and returns an MPLS echo reply packet.
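The TTL-stepping procedure above can be summarized in a short sketch; the path model and node names are hypothetical:

```python
def mpls_traceroute(path, reachable):
    """TTL-stepping traceroute sketch over an ordered list of LSR names.

    The ingress sends echo requests with TTL = 1, 2, ...; the LSR where the
    TTL expires answers with an echo reply, which identifies each hop in
    turn and locates the first node that stops answering.
    """
    hops = []
    for ttl in range(1, len(path) + 1):
        responder = path[ttl - 1]             # node where the TTL expires
        if responder not in reachable:
            hops.append((ttl, None))          # timeout: faulty node located
            break
        hops.append((ttl, responder))
        if responder == path[-1]:
            break                             # egress reached: trace complete
    return hops

path = ["transit1", "transit2", "egress"]
assert mpls_traceroute(path, {"transit1", "transit2", "egress"}) == [
    (1, "transit1"), (2, "transit2"), (3, "egress")]
assert mpls_traceroute(path, {"transit1"}) == [(1, "transit1"), (2, None)]
```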
PW Ping/Traceroute
The PW ping/traceroute function uses the MPLS echo request and MPLS echo reply packets to
validate a PW. The MPLS echo request packet carries the FEC information to be checked and
is transmitted with the other packets that also belong to this FEC along the PW. In this way, this
PW is checked.
NOTE
The implementation of the PW ping/traceroute function is similar to the implementation of the tunnel ping/
traceroute function. Therefore, this topic does not describe the implementation process of the PW ping/
traceroute function.
(Figure: CEs at the customer layer connect through PEs at the access layer to P nodes in the core layer of the PSN; services are carried over LSPs.)
Table 7-2 lists different application scenarios of the CV/FFD, Ping and Traceroute.
Table 7-2 Application scenarios of MPLS OAM detection methods (OAM Type / Usage)
l CV/FFD: unidirectional connectivity check
l Ping: bidirectional connectivity check
l Traceroute: fault location
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select the required NE and choose Configuration > Packet
Configuration > MPLS Management > Unicast Tunnel Management from the Function
Tree.
Step 2 Click the OAM Parameters tab. In the OAM Status area, click Enabled.
Step 3 Click Apply. The Operation Result dialog box is displayed, indicating that the operation is
successful.
Step 4 Click Close.
----End
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Packet Configuration > MPLS
Management > Unicast Tunnel Management from the Function Tree.
Step 2 Click the OAM Parameters tab and set the parameters.
NOTE
Step 3 Click Apply. The Operation Result dialog box is displayed, indicating that the operation is
successful.
Step 4 Click Close.
----End
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select the required NE and choose Configuration > Packet
Configuration > MPLS Management > Unicast Tunnel Management from the Function
Tree.
Step 2 Click the OAM Parameters tab and select a tunnel. Click OAM Operation and choose Start
CV/FFD from the drop-down menu. The Operation Result dialog box is displayed, indicating
that the operation is successful.
NOTE
You can start a CV/FFD only for a tunnel whose Node Type is set to Ingress.
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Packet Configuration > MPLS
Management > Unicast Tunnel Management from the Function Tree.
Step 2 Click the OAM Parameters tab and select a tunnel. Click OAM Operation and choose Ping
Test from the drop-down menu. The Ping Test dialog box is displayed.
NOTE
When the Node Type of the tunnel is Ingress, you can perform the ping test.
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Packet Configuration > MPLS
Management > Unicast Tunnel Management from the Function Tree.
Step 2 Click the OAM Parameters tab and select a tunnel. Click OAM Operation and choose
Traceroute Test from the drop-down menu. The Traceroute Test dialog box is displayed.
NOTE
To support the traceroute test, the Node Type of the tunnel must be Ingress.
Prerequisites
l
A PW must be created.
Procedure
Step 1 In the NE Explorer, select the required NE and then choose Configuration > Packet
Configuration > MPLS Management > PW Management from the Function Tree.
Step 2 Click the PW OAM Parameter tab. In the OAM Status area, click Enabled.
Step 3 Click Apply. The Operation Result dialog box is displayed, indicating that the operation is
successful.
Step 4 Click Close.
----End
Prerequisites
l
A PW must be created.
Procedure
Step 1 In the NE Explorer, select the required NE and then choose Configuration > Packet
Configuration > MPLS Management > PW Management from the Function Tree.
Step 2 Click the PW OAM Parameter tab and set the parameters.
NOTE
l Detection Packet Period (ms): When Detection Mode is set to CV, Detection Packet Period (ms)
is always 1000 ms and cannot be changed; when Detection Mode is set to FFD, Detection Packet
Period (ms) can be set to a required value.
l When configuring Ethernet services carried by PWs, you can configure PW OAM if PW APS is
configured for the Ethernet services.
Step 3 Click Apply. The Operation Result dialog box is displayed, indicating that the operation is
successful.
Step 4 Click Close.
----End
Prerequisites
l
A PW must be created.
Procedure
Step 1 In the NE Explorer, select the required NE and then choose Configuration > Packet
Configuration > MPLS Management > PW Management from the Function Tree.
Step 2 Click the PW OAM Parameter tab and select a PW. Click OAM Operation and choose Ping
Test from the drop-down menu.
Step 3 In the Ping Test dialog box that is displayed, set the required parameters.
Step 4 Click Start Test and view the ping test result.
NOTE
In the test result table, check information about the tested PW including the number of transmitted packets,
number of received packets, packet loss ratio, and delay information. The information helps you measure
the status of the PW.
----End
Prerequisites
l
A PW must be created.
Procedure
Step 1 In the NE Explorer, select the required NE and then choose Configuration > Packet
Configuration > MPLS Management > PW Management from the Function Tree.
Step 2 Click the PW OAM Parameter tab and select a PW. Click OAM Operation and choose
Traceroute Test from the drop-down menu.
Step 3 In the Traceroute Test dialog box that is displayed, set the required parameters.
Step 4 Click Start Test and view the traceroute test result.
----End
A static tunnel, that is, Tunnel 1, is available on the path of NE1-NE2-NE4. The CV, Tunnel
Ping and Tunnel Traceroute check methods are used on Tunnel 1 through the MPLS OAM
mechanism. A static reverse tunnel, that is, Tunnel 3, is available on the path of NE4-NE2-NE1. This tunnel is used to transmit the check results.
A static tunnel, that is, Tunnel 2, is available on the path of NE4-NE3-NE1. The FFD,
Tunnel Ping and Tunnel Traceroute check methods are used on Tunnel 2 through the MPLS
OAM mechanism. A static reverse tunnel, that is, Tunnel 4, is available on the path of NE1-NE3-NE4. This tunnel is used to transmit the check results.
(Figure 7-7: NE1 connects Company A and NE4 connects Company B; NE1 and NE4 are connected through NE2 and NE3.)
Service Planning
The service shown in Figure 7-7 is taken as an example.
l
NE1 initiates the CV, Tunnel Ping, and Tunnel Traceroute check. NE4 notifies NE1 of the
faults through the reverse tunnel, that is, Tunnel 3.
NE4 initiates the FFD check. NE1 notifies NE4 of the faults through the reverse tunnel,
that is, Tunnel 4.
Attribute | NE1 | NE4
OAM Status | Enabled | Enabled
Detection Mode | Manual | Manual
Detection Packet Type | CV | FFD
Detection Packet Period(ms) | 1000 | 500
Prerequisites
You must understand the networking, requirements and service planning of the example.
Procedure
Step 1 On the U2000, configure the MPLS OAM. For the configuration method, see 7.8.2 Setting the
MPLS Tunnel OAM Parameters.
The parameters of NE1 are as follows:
l OAM Status: Enabled
l Detection Mode: Manual
l Detection Packet Type: CV
l Detection Packet Period(ms): 1000
The parameters of NE4 are as follows:
l OAM Status: Enabled
l Detection Mode: Manual
l Detection Packet Type: FFD
l Detection Packet Period(ms): 500
Step 2 On NE1, enable the CV/FFD check for the tunnel. For details, see 7.8.3 Starting a CV/FFD
for an MPLS Tunnel.
Step 3 On NE4, enable the CV/FFD check for the tunnel. For details, see 7.8.3 Starting a CV/FFD
for an MPLS Tunnel.
Step 4 On NE1, perform the MPLS Tunnel Ping check. For details, see 7.8.4 Performing an MPLS
Tunnel Ping Test.
Step 5 On NE1, perform the MPLS Tunnel Traceroute check. For details, see 7.8.5 Performing an
MPLS Tunnel Traceroute Test.
----End
The tunnel contains two PWs. You can configure PW APS to protect the Ethernet services
between the NodeB and the RNC.
(Figure: the NodeB connects to NE1 and the RNC connects to NE2; a tunnel between NE1 and NE2 carries the two PWs.)
Service Planning
The following table lists the configuration parameters of PW OAM.
Table 7-4 Configuration parameters of PW OAM
Parameter | NE1 | NE2
OAM Status | Enabled | Enabled
Detection Mode | Manual | Manual
Detection Packet Type | FFD | FFD
Detection Packet Period(ms) | 3.3 ms | 3.3 ms
Prerequisites
You must be familiar with the networking, requirements, and service planning of the example.
Procedure
Step 1 Configure PW OAM. For the configuration method, see 7.9.2 Setting the Parameters of PW
OAM.
Tunnel ID: For example, 3
Tunnel Name: Character string
Node Type
OAM Status: Enabled, Disabled
Detection Mode: Auto-Sending, Manual
Detection Packet Type: CV, FFD
Reverse Tunnel: For example, 3
CV/FFD Status: Stop, Start
LSP Status: dServer, dLOCV, dTTSI_Mismatch, dTTSI_Mismerge, dExcess, dUnknown, SD, SF, BDI, FDI
0-655350
SD Threshold: 0-100
SF Threshold: 0-100
Source Node
Sink Node
PW ID
PW Type: Default: Ethernet
OAM Status: Enabled, Disabled
Associate AC State: Enabled, Disabled
Detection Mode: Auto-Sending, Manual
Detection Packet Type: CV, FFD
SD Threshold(%): 0 to 100 (default: 0)
SF Threshold(%): 0 to 100 (default: 0)
SDSF
LSR ID to be Received: Default: 255.255.255.255
PW ID to be Received: Specifies the PW ID to be received. Multiple PWs of the same type cannot take the same PW ID; multiple PWs of different types can take the same PW ID.
Remote Disable PW Duration(ms)
Packet Count: For example, 3
EXP Value: 0-7
TTL: 1-255
Transmit Interval(10ms): 10-1000
Packet Length: 64-1400
Wait-to-Response Timeout Time(10ms): 50-6000
Response Mode
NOTE
This parameter can currently be set only to IPv4 UDP Response. If this parameter is set to No Response, a timeout message is displayed as the test result.
Test Result: Character string
EXP Value: 0-7
TTL: 1-255
Packet Length: 84-1400
Wait-to-Response Timeout Time(10ms): 50-6000
Response Mode
NOTE
You can set this parameter only to IPv4 UDP Response. If you set this parameter to No Response, the test result will time out.
Test Result: Character string
Alarm Name
MPLS_TUNNEL_BDI
MPLS_TUNNEL_Excess
MPLS_TUNNEL_FDI
MPLS_TUNNEL_LOCV
MPLS_TUNNEL_MISMATCH
MPLS_TUNNEL_MISMERGE
MPLS_TUNNEL_SD
MPLS_TUNNEL_SF
MPLS_TUNNEL_UNKNOWN
MPLS_PW_LOCV
MPLS_PW_Excess
MPLS_PW_BDI
MPLS_PW_MISMATCH
MPLS_PW_MISMERGE
MPLS_PW_SD
MPLS_PW_SF
MPLS_PW_UNKNOWN

Performance Event Name
MPLS_TUNNEL_LS
MPLS_TUNNEL_SLS
MPLS_TUNNEL_CSLS
MPLS_TUNNEL_UAS
MPLS_PW_LS
MPLS_PW_SLS
MPLS_PW_CSLS
MPLS_PW_UAS
8 HQoS
HQoS
You can configure the HQoS to provide dedicated bandwidths, decrease the packet loss ratio,
and lower the packet transmission delay and delay jitter.
8.10 Configuration Example
This topic considers the QoS configuration for the Ethernet service on the OptiX OSN 3500 as
an example. You can configure hierarchical QoS policies and apply these policies to the specified
Ethernet services to ensure the QoS.
8.11 Troubleshooting
Following a certain troubleshooting flow facilitates your fault locating and rectification. This
topic considers an HQoS fault on the OptiX OSN 3500 equipment as an example to describe
how to handle an HQoS fault, helping you handle HQoS faults on other types of equipment.
8.12 Parameter Description: HQoS
This topic describes the parameters that are related to the service configuration.
8.13 Relevant Alarms and Performance Events
This topic describes the alarms and performance events that are related to this feature.
8.1 Introduction
As IP and transmission technologies continue to evolve, the weaknesses of the traditional broadband service business model have gradually been exposed. The most prominent problem is that operators' capital expenditure (CAPEX) and operational expenditure (OPEX) grow much faster than the number of users, while the average revenue per user (ARPU) continues to decline. By providing user-based differentiated services, HQoS brings more value to operators.
Definition
HQoS is a QoS technology that controls user traffic and performs priority-based traffic
scheduling at the same time. In addition, HQoS provides a perfect traffic measurement function.
Thus, a network administrator can monitor the bandwidth that each service occupies and can
allocate an appropriate bandwidth to each service based on traffic analysis.
Purpose
HQoS is developed based on the traditional QoS scheme. The equipment with the HQoS function
provides users or services with services at differentiated quality levels. The features of the HQoS
function are as follows:
l
Delay: indicates the time elapsed after a service is transmitted at a reference point and
before the service is received at another reference point.
Jitter: indicates the difference between the time points when packets that traverse the same
route arrive at the user receive end.
Packet loss ratio: indicates the maximum ratio of the discarded packets to the total number
of transmitted packets. Packet discarding generally results from network congestion.
(Table: service types and their sensitivity to delay, jitter, and packet loss. The service types, with typical services, are: control information (Ethernet protocol packets, Ethernet OAM packets); conversational service and signaling service (VoIP, videophone, interactive games); streaming service (VOD); interactive service (web page browsing); and background service (email/film/MP3 downloading, FTP service). From top to bottom, service priorities are in descending order.)
8.2.2 DiffServ
As an end-to-end QoS control model, DiffServ is simple to implement and easy to extend.
Figure 8-1 shows the application of the DiffServ model.
Figure 8-1 Networking diagram of the DiffServ model
DS node
DS domain
Non-DS node
DS node
DS node
Non-DS node
The DiffServ (DS) domain consists of a group of network nodes (DS nodes) that provide the
same service policy and realize the same per-hop behavior (PHB).
The DS nodes are of two types, namely, DS edge nodes and internal DS nodes. The DS edge
node classifies the traffic that enters the DS domain. The DS edge node marks different PHB
service levels according to different types of service traffic. The internal DS node controls the
flow based on the PHB service levels.
As the node in the DS domain, the OptiX OSN equipment uses the following technologies to
perform the end-to-end QoS control.
The DiffServ is a flow management scheme based on the service type and provides the
management at a large granularity.
The DiffServ provides the architecture to allow service classification and differentiated
treatments for different types of services.
The DiffServ is related to the classification and marking mechanisms.
PHB
Table 8-2 provides the service quality that corresponds to each PHB service level.
Table 8-2 PHB service level and the corresponding PHB service quality
PHB Service Level
BE
AF1
These service levels allow the service traffic to exceed the specified
range. These service levels ensure the forwarding quality of the traffic
within the specified range and downgrade the forwarding quality of
the traffic beyond the specified range. The traffic beyond the specified
range is not simply discarded.
These PHB service levels are applicable to the transmission of
multimedia services.
Each AF level is further classified into three discarding priorities
(colors). For example, the AF1 level can be further classified into the
following discarding priorities:
l AF11: corresponds to the green priority. The traffic of this priority
can pass normally.
l AF12: corresponds to the yellow priority. When network congestion occurs, packets of this priority are discarded.
AF2
AF3
AF4
EF
These service levels require that the rate of the traffic sent from any
DS node should not be less than the specified rate in any conditions.
These service levels simulate the forwarding effect of a virtual leased
line. In this manner, the forwarding service of low packet loss ratio,
low delay, and high bandwidth can be provided. The PHB service
levels are applicable to video services and VoIP services.
CS6
CS7
Table 8-3 lists the recommended mapping relation between the priority of the ingress packet
and the PHB service level.
Table 8-3 Mapping relation between the priority of the ingress packet and the PHB service level
VLAN Priority | DEI SVLAN Priority | DSCP (Decimal) | MPLS EXP Priority | PHB Service Level
0 | 8 | 0 | 0 | BE
1 | 9 | 8, 10 | 1 | AF11
1 | 9 | 12 | 1 | AF12
1 | 9 | 14 | 1 | AF13
2 | 10 | 16, 18 | 2 | AF21
2 | 10 | 20 | 2 | AF22
2 | 10 | 22 | 2 | AF23
3 | 11 | 24, 26 | 3 | AF31
3 | 11 | 28 | 3 | AF32
3 | 11 | 30 | 3 | AF33
4 | 12 | 32, 34 | 4 | AF41
4 | 12 | 36 | 4 | AF42
4 | 12 | 38 | 4 | AF43
5 | 13 | 40, 46 | 5 | EF
6 | 14 | 48 | 6 | CS6
7 | 15 | 56 | 7 | CS7
NOTE
In the case of the DSCP priority, if the DSCP value is not within the range specified in the previous table,
the PHB service level is BE.
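For illustration, the DSCP portion of the recommended ingress mapping can be expressed as a lookup that falls back to BE for unlisted values, as the NOTE above states. This is only a sketch; the dictionary and function names are our own, not part of the equipment.

```python
# DSCP (decimal) to PHB service level, following the recommended ingress
# mapping; any DSCP value not listed in the table maps to BE.
DSCP_TO_PHB = {
    0: "BE",
    8: "AF11", 10: "AF11", 12: "AF12", 14: "AF13",
    16: "AF21", 18: "AF21", 20: "AF22", 22: "AF23",
    24: "AF31", 26: "AF31", 28: "AF32", 30: "AF33",
    32: "AF41", 34: "AF41", 36: "AF42", 38: "AF43",
    40: "EF", 46: "EF", 48: "CS6", 56: "CS7",
}

def phb_for_dscp(dscp):
    # Unlisted DSCP values fall back to the BE service level.
    return DSCP_TO_PHB.get(dscp, "BE")

print(phb_for_dscp(46))  # EF
print(phb_for_dscp(3))   # BE (not listed in the table)
```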
Table 8-4 lists the recommended mapping relation between the priority of the egress packet and
the PHB service level.
Table 8-4 Mapping relation between the priority of the egress packet and the PHB service level
PHB Service Level | VLAN Priority | DEI SVLAN Priority | DSCP (Decimal) | MPLS EXP Priority
BE | 0 | 8 | 0 | 0
AF11 | 1 | 9 | 8 | 1
AF12 | 1 | 9 | 12 | 1
AF13 | 1 | 9 | 14 | 1
AF21 | 2 | 10 | 16 | 2
AF22 | 2 | 10 | 20 | 2
AF23 | 2 | 10 | 22 | 2
AF31 | 3 | 11 | 24 | 3
AF32 | 3 | 11 | 28 | 3
AF33 | 3 | 11 | 30 | 3
AF41 | 4 | 12 | 32 | 4
AF42 | 4 | 12 | 36 | 4
AF43 | 4 | 12 | 38 | 4
EF | 5 | 13 | 40 | 5
CS6 | 6 | 14 | 48 | 6
CS7 | 7 | 15 | 56 | 7
Simple traffic classification is generally performed on the DS interior node. The same
simple traffic classification rule is applied to all the nodes in the DS domain.
The OptiX OSN equipment allows access of Ethernet packets, IP packets, and MPLS
packets. In addition, the OptiX OSN equipment allows mapping between the VLAN
priority and the PHB service level, between the IP DSCP and the PHB service level, and
between the MPLS EXP and the PHB service level.
Complex traffic classification is usually applied on DS edge nodes to finely control the traffic flows entering the network and to mark the priority information carried in packets. Nodes inside the DS domain then need to perform only simple traffic classification.
The OptiX OSN 3500/7500/7500 II equipment supports complex traffic classification for
Ethernet packets and IP packets, providing users with a more refined and flexible traffic
classification scheme.
The OptiX OSN 1500 equipment supports complex traffic classification based on VLAN
IDs only for Ethernet packets.
NOTE
In the case of queues of the lowest priority (that is, queues providing BE services), HQoS is not ensured. When network congestion occurs, the OptiX OSN equipment does not schedule the queues that provide the BE service level.
SP Scheduling Algorithm
Figure 8-2 illustrates the SP scheduling algorithm.
Figure 8-2 SP queues
(The figure shows packets to be transmitted through the interface being classified into queues 0 to 7, where queue 7 has the highest priority and queue 0 the lowest. Egress queue scheduling always serves the highest-priority queue that contains packets, and the scheduled packets are transmitted out of the interface.)
The SP scheme is used to schedule queues of higher priorities (that is, queues for providing CS7, CS6, and
EF services).
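A minimal sketch of the SP discipline, with queue numbering as in Figure 8-2 (the data structures and names here are illustrative only):

```python
# Strict-priority (SP) scheduling sketch: queue 7 is the highest priority
# and queue 0 the lowest; the egress always drains the highest non-empty queue.
from collections import deque

def sp_dequeue(queues):
    """queues: {priority: deque of packets}; return the next packet or None."""
    for prio in sorted(queues, reverse=True):   # highest priority first
        if queues[prio]:
            return queues[prio].popleft()
    return None                                 # all queues empty

q = {7: deque(["cs7-pkt"]), 5: deque(["ef-pkt"]), 0: deque(["be-pkt"])}
print([sp_dequeue(q) for _ in range(3)])
# ['cs7-pkt', 'ef-pkt', 'be-pkt']
```

Note the drawback this illustrates: as long as a higher-priority queue holds packets, lower-priority queues are never served, which is why SP is reserved for the CS7, CS6, and EF queues.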
(The figure shows WFQ scheduling: packets are classified into queues 0 to 3 with weights 10, 10, 30, and 50 respectively; egress queue scheduling shares the interface bandwidth among the queues in proportion to their weights before the packets are transmitted out of the interface.)
When the WFQ scheme is used, queues are scheduled in a fair manner according to the weight allocated to each queue. Generally, larger weights and more bandwidth are allocated to queues of higher priorities, and smaller weights and less bandwidth to queues of lower priorities. This scheduling scheme ensures that packets in queues of higher priorities are forwarded with little delay, while packets in queues of lower priorities are still processed.
NOTE
The WFQ scheme is used to schedule queues of medium priorities (that is, queues for providing AF4, AF3, AF2, and AF1 services).
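A round-based sketch of WFQ using the example weights from the figure (50, 30, 10, and 10): each queue may send up to its weight in packets per round, so sustained traffic shares bandwidth in proportion to the weights. This is an illustration, not the board's actual scheduler.

```python
# Round-based WFQ sketch: each queue may send up to `weight` packets per
# round, so long-lived traffic shares bandwidth roughly 50:30:10:10.
from collections import deque

def wfq_round(queues, weights):
    """Run one scheduling round; return the packets sent in the round."""
    sent = []
    for qid in sorted(queues, reverse=True):     # visit every queue
        for _ in range(weights[qid]):            # up to `weight` packets each
            if queues[qid]:
                sent.append(queues[qid].popleft())
    return sent

queues = {3: deque("a" * 100), 2: deque("b" * 100),
          1: deque("c" * 100), 0: deque("d" * 100)}
weights = {3: 50, 2: 30, 1: 10, 0: 10}
sent = wfq_round(queues, weights)
print(sent.count("a"), sent.count("b"), sent.count("c"), sent.count("d"))
# 50 30 10 10
```

Unlike SP, every queue with a nonzero weight is served in every round, so low-priority queues are never starved.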
When the rate of packets is equal to or lower than the preset committed information rate
(CIR), these packets are marked green and pass the CAR policing. These packets are always
forwarded first in network congestion.
When the rate of packets exceeds the preset peak information rate (PIR), these packets
whose rate is higher than the PIR are marked red and directly discarded.
When the rate of packets is higher than the CIR but is lower than the PIR, the packets whose
rate is higher than the CIR can pass the restriction but are marked yellow. Yellow packets
can be set to "discard", "pass", or "remark". If packets are set to "remark", these packets
are mapped to another specified queue with a certain priority (that is, the priority of these
packets are changed) and then forwarded.
When the rate of packets that pass the CAR restriction is equal to or lower than the CIR in
a certain period, certain packets can burst and these packets are always forwarded first in
network congestion. The maximum burst traffic is determined by the committed burst size
(CBS).
l
When the rate of packets that pass the CAR restriction is higher than the CIR but is equal
to or lower than the PIR, certain packets can burst and these packets are marked yellow.
The maximum burst traffic is determined by the peak burst size (PBS).
Figure 8-4 shows the traffic change after the CAR processing. The packets marked red are
directly discarded, and the packets marked yellow and green pass the CAR policing. In addition,
the packets marked yellow are processed according to the preset value.
Figure 8-4 CAR processing
The OptiX OSN equipment supports the tail drop policy and the WRED policy. In addition, the
OptiX OSN equipment supports the configuration of the discarding starting point and discarding
rate of the WRED.
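A sketch of the WRED drop decision under the description above: below the discarding starting point nothing is dropped, between the two thresholds the drop probability ramps linearly (the configurable discarding rate), and above the high threshold every arriving packet is dropped. The threshold values used here are illustrative only.

```python
# WRED drop-probability sketch: no drops below the low threshold, a linear
# ramp up to max_p between the thresholds, and certain drop above the high
# threshold (tail drop, by contrast, drops only when the queue is full).

def wred_drop_probability(queue_len, low, high, max_p):
    if queue_len <= low:
        return 0.0                 # below the discarding starting point
    if queue_len >= high:
        return 1.0                 # queue too long: drop everything
    # linear ramp between the discarding starting point and the high threshold
    return max_p * (queue_len - low) / (high - low)

print(wred_drop_probability(10, low=20, high=40, max_p=0.5))  # 0.0
print(wred_drop_probability(30, low=20, high=40, max_p=0.5))  # 0.25
print(wred_drop_probability(45, low=20, high=40, max_p=0.5))  # 1.0
```

Dropping a random fraction of packets early, before the queue fills, is what lets WRED avoid the global TCP synchronization that plain tail drop can cause.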
8.3 Specifications
This section describes the capacity of the OptiX OSN equipment for supporting HQoS.
Table 8-5 provides specifications associated with HQoS.
Table 8-5 Specifications associated with HQoS
Item | Specifications
DiffServ | Maximum number of supported DiffServ (DS) domains (DS domains are created based on the Ethernet port)
PHBs: CS7; CS6; EF; AF43, AF42, AF41; AF33, AF32, AF31; AF23, AF22, AF21; AF13, AF12, AF11; BE
Types of trusted packet priorities: CVLAN priority, SVLAN priority, DSCP values, MPLS EXP values
NOTE
l A QinQ-based NNI port only trusts packets carrying DSCP values or SVLAN priorities.
l An MPLS-based NNI port only trusts packets carrying MPLS EXP values.
Complex traffic classification | Application point: ingress port
Classification bases: source IP, destination IP, source MAC, destination MAC, protocol type, source port number, destination port number, ICMP, DSCP value, IP-Precedence, CVLAN ID, CVLAN priority, SVLAN ID, SVLAN priority, DEI
CAR
Queue scheduling | SP, WFQ, SP+WFQ
8.5 Availability
The HQoS function requires the support of the applicable equipment and boards.
Version Support
Hardware Support
Board | Applicable Equipment | Applicable Version
Q1PEGS2, R1PEFS8, R1PEGS1, R1PEF4F | OptiX OSN 1500A/1500B | V100R009C03 and later
N1PEG16, N1PEX1 | OptiX OSN 3500 | V100R009C03 and later
N1PETF8 | OptiX OSN 3500/7500 | V100R009C03 and later
N1PEG8, N2PEX1, N1PEX2, N1PEFF8 | OptiX OSN 3500/7500 | V200R011C00 and later
TNN1EX2, TNN1EG8, TNN1ETMC | OptiX OSN 7500 II | V200R011C01 and later
TNN1EFF8 | OptiX OSN 7500 II | V200R011C02 and later

These boards support the service WRED, port WRED, WFQ, PW, QinQ, V-UNI ingress, V-UNI egress, and port policies, with the following exceptions:
l The Q1PEGS2, R1PEFS8, R1PEGS1, and R1PEF4F boards do not support the PW policy.
l The N1PEG16 and N1PEX1 boards do not support the port WRED policy.
l The N1PEX2 board supports the port WRED policy on the OptiX OSN 7500 but not on the OptiX OSN 3500.
Maintenance Principles
None.
8.7 Principles
You should be familiar with the implementation principles of HQoS before configuring the HQoS function.
(Figure: incoming packets are classified; packets that require traffic control enter the token bucket for processing, and packets for which the bucket has insufficient tokens are discarded or marked.)
Issue 03 (2012-11-30)
285
8 HQoS
Packets are classified according to the preset matching rules: The packets whose traffic
performance is not defined are directly transmitted without being processed in the token bucket;
the packets that require traffic control enter the token bucket for processing.
The token bucket can be considered as a container with a certain capacity for storing tokens.
Tokens are placed into the bucket at a rate specified by the user. The user also sets the capacity
of the token bucket, so that no more tokens can be placed into the bucket when the number of
tokens is higher than the capacity of the bucket.
When packets are processed in the token bucket, the packets can pass and be transmitted out if
the token bucket has sufficient tokens to transmit the packets. In addition, the number of tokens
in the bucket decreases according to the length of the transmitted packets. When the number of
tokens in the bucket is so small that no more packets can be transmitted, the remaining packets
are discarded or marked. In this manner, traffic of certain packets is controlled.
When the token bucket has sufficient tokens, the packets that are represented by these tokens
can be transmitted, which allows transmission of the burst data in some cases. When no tokens
are available in the token bucket, packets cannot be transmitted until new tokens are generated
in the bucket. This method ensures that the traffic of packets is equal to or less than the rate at
which tokens are generated, and traffic is controlled.
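The token-bucket behavior described above can be sketched as follows. The rates and sizes are illustrative; the actual equipment meters traffic in hardware.

```python
# Token-bucket sketch: tokens accrue at a user-set rate up to the bucket
# capacity; a packet is forwarded only if enough tokens (bytes) are present.

class TokenBucket:
    def __init__(self, rate_bps, capacity_bytes):
        self.rate = rate_bps           # token fill rate, in bytes per second
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes   # bucket starts full

    def tick(self, seconds):
        # Add tokens, never exceeding the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def try_send(self, pkt_bytes):
        # Forward only if enough tokens; consume them on success.
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        return False                   # packet is discarded or marked

tb = TokenBucket(rate_bps=1000, capacity_bytes=1500)
print(tb.try_send(1500))  # True: a burst up to the capacity passes
print(tb.try_send(100))   # False: bucket empty, packet nonconforming
tb.tick(1)                # one second later, 1000 tokens have accrued
print(tb.try_send(100))   # True
```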
The Ethernet processing board that supports QoS assesses traffic by using the single rate three
color marker (srTCM, which complies with RFC 2697) algorithm or the two rate three color
marker (trTCM, which complies with RFC 2698) algorithm. Packets are marked green, yellow,
or red according to the assessment results and then are allocated with different priorities if
discarding is required.
The srTCM algorithm meters bursts in terms of packet size, whereas the trTCM algorithm meters bursts in terms of packet rate.
srTCM Algorithm
Figure 8-6 illustrates the working principle of the srTCM algorithm.
Figure 8-6 srTCM algorithm
Issue 03 (2012-11-30)
286
8 HQoS
Parameters: committed information rate (CIR), committed burst size (CBS), and excess
burst size (EBS)
Tokens enter the two token buckets at the CIR. Every token represents a number of bytes. If the
packets that arrive at the buckets do not exceed the CBS, they are marked green. If the packets
that arrive at the buckets exceed the CBS but do not exceed the EBS, they are marked yellow.
If the packets that arrive at the buckets exceed the EBS, they are marked red. Red packets are
directly discarded; if network congestion occurs, yellow packets are discarded to ensure passing
of green packets. If the token buckets are full of tokens because no packets need to be transmitted
in a long time, a certain number of packets can burst. The total number of bytes contained in
these burst packets is equal to that represented by the tokens in the token buckets. That is, the
allowed number of burst bytes (namely, CBS) is determined by the sizes of the token buckets
(that is, the total quantity of tokens).
The CIR, CBS, and EBS parameters are set by users.
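A color-blind sketch of the srTCM metering described above: per RFC 2697, the C bucket of size CBS fills at the CIR, and overflow tokens spill into the E bucket of size EBS. The class name and parameter values below are illustrative.

```python
# srTCM (RFC 2697) sketch, color-blind mode: green if the C bucket covers
# the packet, yellow if the E bucket does, red otherwise.

class SrTCM:
    def __init__(self, cir, cbs, ebs):
        self.cir, self.cbs, self.ebs = cir, cbs, ebs
        self.tc, self.te = cbs, ebs          # both buckets start full

    def tick(self, seconds):
        tokens = self.cir * seconds          # tokens arrive at the CIR
        add_c = min(tokens, self.cbs - self.tc)
        self.tc += add_c
        self.te = min(self.ebs, self.te + tokens - add_c)  # overflow to E

    def color(self, size):
        if self.tc >= size:
            self.tc -= size
            return "green"                   # within the committed burst
        if self.te >= size:
            self.te -= size
            return "yellow"                  # within the excess burst
        return "red"                         # exceeds both bursts

m = SrTCM(cir=1000, cbs=1500, ebs=1500)
print(m.color(1500))  # green (within CBS)
print(m.color(1000))  # yellow (C bucket empty, E bucket covers it)
print(m.color(1000))  # red (E bucket down to 500 tokens)
```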
trTCM Algorithm
Figure 8-7 illustrates the working principle of the trTCM algorithm.
Figure 8-7 trTCM algorithm
Parameters: CIR, peak information rate (PIR), CBS, and peak burst size (PBS)
Tokens enter the two token buckets at the PIR and CIR respectively. Each token represents a
number of bytes. When the rate of the packet exceeds the PIR, the packet is marked red. When
the rate of the packet is between the PIR and the CIR, the packet is marked yellow. When the
rate of the packet does not exceed the CIR, the packet is marked green. Red packets are discarded
directly; if network congestion occurs, yellow packets are discarded to ensure passing of green
packets. The allowed number of burst bytes (CBS and PBS) is determined by the size of the
token bucket.
The values of the CIR, CBS, PIR, and PBS can be set by users.
There are two coloring modes: color blind and color sensitive. During the coloring process, the
current packet color is considered in color sensitive mode but not considered in color blind mode.
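A color-blind sketch of the trTCM metering described above: per RFC 2698, the P bucket of size PBS fills at the PIR and the C bucket of size CBS fills at the CIR. Names and values are illustrative.

```python
# trTCM (RFC 2698) sketch, color-blind mode: red if the P bucket cannot
# cover the packet, yellow if only the P bucket can, green if both can.

class TrTCM:
    def __init__(self, cir, cbs, pir, pbs):
        self.cir, self.cbs = cir, cbs
        self.pir, self.pbs = pir, pbs
        self.tc, self.tp = cbs, pbs          # both buckets start full

    def tick(self, seconds):
        self.tc = min(self.cbs, self.tc + self.cir * seconds)
        self.tp = min(self.pbs, self.tp + self.pir * seconds)

    def color(self, size):
        if self.tp < size:
            return "red"                     # exceeds the peak rate
        if self.tc < size:
            self.tp -= size
            return "yellow"                  # between the CIR and the PIR
        self.tc -= size
        self.tp -= size
        return "green"                       # within the committed rate

m = TrTCM(cir=1000, cbs=1000, pir=2000, pbs=2000)
print(m.color(1000))  # green
print(m.color(1000))  # yellow (C bucket empty, P bucket still has tokens)
print(m.color(1000))  # red (P bucket empty too)
```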
Based on a port, service, PW, or QinQ link, the hierarchical scheduling mechanism implements traffic scheduling, providing a more refined QoS control granularity.
Based on a port, service, PW, or QinQ link, the hierarchical flow control mechanism provides a more comprehensive quality of service.
Configurable WFQ and WRED policies allow more flexible QoS control.
NOTE
Implementation of HQoS control on the OptiX OSN 1500 equipment is simpler than that on the OptiX
OSN 3500/7500/7500 II equipment. Figure 8-9 shows the HQoS function points provided by the OptiX
OSN 1500 equipment and corresponding QoS processing.
Figure 8-8 shows the HQoS function points provided by the OptiX OSN 3500/7500/7500 II
equipment and corresponding QoS processing.
Figure 8-8 HQoS function points provided by the OptiX OSN 3500/7500/7500 II equipment
and corresponding QoS processing
(From the access side to the network side, the QoS function points and corresponding processing are: V-UNI: apply V-UNI ingress traffic classification and control the V-UNI group bandwidth; PW/QinQ: apply the PW policy and control the PW bandwidth, or apply the QinQ policy; Tunnel: control the tunnel bandwidth; Port: apply the port policy.)
Figure 8-9 shows the HQoS function points provided by the OptiX OSN 1500 equipment and
corresponding QoS processing.
Figure 8-9 HQoS function points provided by the OptiX OSN 1500 equipment and
corresponding QoS processing
(From the access side to the network side, the QoS function points and corresponding processing are: V-UNI: apply V-UNI ingress traffic classification and the V-UNI egress policy; QinQ: apply the QinQ ingress bandwidth policy; Tunnel: control the tunnel bandwidth; Port: apply the port policy.)
This topic considers a network comprising the OptiX OSN 3500/7500/7500 II as an example. HQoS implemented by the OptiX OSN 1500 is simpler. Learn more about the HQoS function points on the OptiX OSN 1500 and the corresponding QoS processing in 8.7.2 QoS Model.
The equipment that functions as a DS edge node provides these hierarchical QoS control
functions for Ethernet traffic:
l
The OptiX OSN equipment that functions as a DS interior node provides the following
hierarchical QoS control functions for the Ethernet traffic:
l
(Figure: networking diagram of the DiffServ domain. An Ethernet switch connects through an OSN edge DS node, an OSN internal DS node, and another OSN edge DS node to a core switch. The tunnel crosses the OSN network; QoS function points on the network side include the port, PW, tunnel, V-UNI egress, and V-UNI group.)
Figure 8-11 QoS configuration flow for the OptiX OSN 3500/7500/7500 II equipment that functions as the DiffServ edge node
(Flow: Start → configure the DiffServ domain (required) → configure the following policies as needed (optional): port WRED policy, WFQ policy, service WRED policy, port policy, V-UNI ingress policy, V-UNI egress policy, PW policy, QinQ policy → End.)
Figure 8-12 QoS configuration flow for the OptiX OSN 1500 equipment that functions as the DiffServ edge node
(Flow: Start → configure the following policies as needed: port WRED policy, WFQ policy, port policy, CAR policy, V-UNI ingress/egress policy, QinQ policy → apply the V-UNI ingress/egress policy and the QinQ policy → End.)
Figure 8-13 shows the QoS configuration flow for the OptiX OSN equipment that functions as
the internal DiffServ node.
Figure 8-13 QoS configuration flow for the OptiX OSN equipment that functions as the internal DiffServ node
(Flow: Start → configure the DiffServ domain (required) → End.)
Prerequisites
l
Background Information
As a node in the DiffServ domain, the equipment supports the creation of the DiffServ domain
based on the physical port. Each set of the equipment supports a maximum of eight DiffServ
domains.
A default DiffServ domain exists on the equipment. Before other DiffServ domains are created,
all the ports belong to this default DiffServ domain.
Create the DiffServ domain as follows:
l
Create the mapping relations in the DiffServ domain. The mapping relations map the priorities of user packets to PHB service levels in both the ingress and egress directions.
Configure the physical ports that use the mapping relations. That is, add the physical ports
to the domain.
8.2.2 DiffServ provides the service quality that corresponds to each PHB service level.
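The mapping idea can be sketched in Python; the priority-to-PHB table below is an illustrative assumption, not the equipment's actual default mapping:

```python
# Sketch of a DiffServ-domain mapping relation: user-packet priorities
# (here C-VLAN priority 0-7) are mapped to PHB service levels in the
# ingress direction, and back in the egress direction.
# The table values are illustrative assumptions, not the real defaults.

INGRESS_MAP = {0: "BE", 1: "AF1", 2: "AF2", 3: "AF3",
               4: "AF4", 5: "EF", 6: "CS6", 7: "CS7"}

# Egress direction: invert the ingress mapping.
EGRESS_MAP = {phb: pri for pri, phb in INGRESS_MAP.items()}

def classify_ingress(cvlan_priority: int) -> str:
    """Map a C-VLAN priority to a PHB service level on ingress."""
    return INGRESS_MAP[cvlan_priority]

def remark_egress(phb: str) -> int:
    """Map a PHB service level back to a C-VLAN priority on egress."""
    return EGRESS_MAP[phb]

print(classify_ingress(5))   # EF
print(remark_egress("AF4"))  # 4
```

In the equipment, one such mapping relation pair is created per DiffServ domain and then bound to physical ports by adding the ports to the domain.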
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > QoS
Management > DiffServ Domain Management > DiffServ Domain Management from the
Function Tree.
Step 2 Click Create. When the Create DS Mapping Relation dialog box is displayed, set the following
parameters:
l Mapping relation ID, mapping relation name, and package type
l Ingress mapping relation and egress mapping relation (When no configuration operation is
performed, use the mapping relation recommended by the system.)
NOTE
By default, the equipment performs the mapping according to the CVLAN. If the mapping according
to the DEISVLAN is required, enable SVLAN DEI in the SVLAN DEI Used Flag tab page.
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > QoS
Management > Policy Management from the Function Tree.
Step 2 Select Port WRED Policy and click New. When the Create Port WRED Policy dialog box is
displayed, set the following parameters:
l Policy ID and policy name
l Discarding policies for green, yellow, and red packets
The discarding policy includes the lower threshold for discarding, upper threshold for
discarding, and discarding policy.
NOTE
Currently, only the default port WRED policy is available; a new port WRED policy cannot be created.
A service WRED policy specifies the discarding threshold and discarding probability for packets of different colors and is applicable to the V-UNI ingress policy, V-UNI egress policy, PW policy, and QinQ policy.
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 In the NE Explorer, select the required NE and choose Configuration > Packet
Configuration > QoS Management > Policy Management > Service WRED Policy from the
Function Tree.
Step 2 Click New. In the Create Service WRED Policy dialog box that is displayed, set the following
parameters:
l Policy ID and policy name
l Discarding policies for green, yellow, and red packets. The discarding policy specifies the
lower threshold for discarding packets, upper threshold for discarding packets, and discarding
probability.
Step 3 Click OK.
----End
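A service WRED policy's per-color behavior can be sketched as follows; the threshold and probability values are illustrative assumptions, not equipment defaults:

```python
# Sketch of the WRED discard decision described above: each packet color
# has a lower discard threshold, an upper discard threshold, and a
# maximum discard probability. Below the lower threshold nothing is
# dropped; at or above the upper threshold everything is dropped; in
# between, the drop probability rises linearly. Values are illustrative.
import random

WRED_PROFILE = {
    # color: (lower_threshold, upper_threshold, max_drop_probability)
    "green":  (200, 400, 0.10),
    "yellow": (100, 300, 0.50),
    "red":    (50, 200, 1.00),
}

def drop_probability(color: str, queue_depth: int) -> float:
    lo, hi, p_max = WRED_PROFILE[color]
    if queue_depth < lo:
        return 0.0
    if queue_depth >= hi:
        return 1.0
    return p_max * (queue_depth - lo) / (hi - lo)

def should_drop(color: str, queue_depth: int) -> bool:
    return random.random() < drop_probability(color, queue_depth)

print(drop_probability("green", 300))  # 0.05 (halfway between thresholds)
```

Because red packets get the lowest thresholds and the highest drop probability, they are discarded first as the queue fills, which is the point of color-aware WRED.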
Prerequisites
You must be an NM user with NE operator authority or higher.
Precautions
The default WFQ scheduling policy (policy ID: 1; policy name: WFQ Default Scheduling)
cannot be created, modified, or deleted.
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > QoS
Management > Policy Management > WFQ Scheduling Policy from the Function Tree.
Step 2 Click New. When the Create WFQ Policy dialog box is displayed, set the following parameters:
l Policy ID and name
l AF scheduling weight
NOTE
The sum of AF1-AF4 scheduling weights should not be more than 100.
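How the AF scheduling weights translate into bandwidth shares can be sketched as follows; the link rate and weight values are illustrative:

```python
# Sketch of how WFQ scheduling weights translate into bandwidth shares
# for the AF1-AF4 queues. Per the note above, the weights must not sum
# to more than 100; each queue's share of the schedulable bandwidth is
# proportional to its weight. Values are illustrative.

def wfq_shares(weights: dict, link_kbps: int) -> dict:
    """Return the bandwidth (kbit/s) each AF queue receives under WFQ."""
    total = sum(weights.values())
    if total > 100:
        raise ValueError("sum of AF1-AF4 weights must not exceed 100")
    return {q: link_kbps * w // total for q, w in weights.items()}

# Default policy: AF1-AF4 each weighted 25 (equal shares).
print(wfq_shares({"AF1": 25, "AF2": 25, "AF3": 25, "AF4": 25}, 100000))
```

This is why the default WFQ scheduling policy allocates 25% to each AF queue: equal weights yield equal shares of whatever bandwidth remains after the strict-priority queues are served.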
Prerequisites
You must be an NM user with NE operator authority or higher.
Background Information
When you configure the port policy, set the following parameters:
l
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > QoS
Management > Policy Management > Port Policy from the Function Tree.
Step 2 Click New. When the Create Port Policy dialog box is displayed, set the following parameters:
l Policy ID and policy name
l CoS parameters
l Applied WFQ scheduling policy
Click the selection button. When the WFQ Schedule Policy dialog box is displayed, select the applied WFQ scheduling policy.
NOTE
l If ACL filtering is required for the accessed traffic, click the Traffic Classification Configuration tab and then click New. When the Create Traffic Classification dialog box is displayed, set the following parameters:
Traffic classification ID
ACL action
Click Add to configure the traffic classification rule.
Click OK.
Step 3 In the Create Port Policy dialog box, click OK.
Step 4 Select the port policy that is created. Click the Applied Object tab, and then click Modify.
Step 5 When the Configure Port dialog box is displayed, specify the port that uses the policy. Click
OK.
----End
Related Information
Based on the port policy that is created, you can create a new port policy quickly by using the
copy function. Perform the following procedures:
1.
In the Port Policy tab page, click Copy. When the Duplicate QoS Policy dialog box is displayed, select the policy to be copied and enter the name of the new policy. Click OK.
2.
Modify the parameters of the new policy to ensure that the new policy meets the QoS
requirement.
NOTE
When the copy function is used, the application policy information of the port cannot be duplicated.
Prerequisites
l
The port WRED policy or the WFQ scheduling policy must be configured if the WRED
policy or the WFQ scheduling policy is required by the port policy.
Background Information
When you configure the port policy, configure the following parameters:
l
Parameters for different CoSs and packet discarding policies. This provides ensured QoS
for different CoSs.
Procedure
Step 1 In the NE Explorer, select the NE, and choose Configuration > Packet Configuration > QoS
Management > Policy Management > Port Policy from the Function Tree.
Step 2 Click New. In the Create Port Policy dialog box that is displayed, set Policy Name and Policy
ID.
Step 3 Click the selection button. In the WFQ Scheduling Policy dialog box that is displayed, select one policy to be applied. Click OK.
NOTE
By default, the weights of the queues AF1-AF4 in the WFQ scheduling policy are equal. Each queue is
allocated 25%. The WFQ scheduling policy can be preset for a port.
Step 4 Set the Bandwidth Limit parameter of the CoS that requires the application of the bandwidth
limit to Enabled, and set the parameters of CIR(kbit/s), PIR(kbit/s), CBS(byte), PBS(byte),
and Tail Drop Threshold(%).
Step 5 Double-click the Port WRED Policy parameter of the CoS that requires the application of the
port WRED policy. In the Port WRED Policy dialog box that is displayed, select one policy
that needs to be applied. Click OK.
. Click OK.
----End
Related Information
Based on the port policy that is created, you can create a new port policy quickly by using the
copy function. Perform the following procedures:
1.
In the Port Policy tab page, click Copy in the lower right corner. In the Duplicate QoS Policy dialog box that is displayed, select the policy to be copied and enter the name of the new policy. Click OK.
2.
Modify the parameters of the new policy to ensure that the new policy meets the QoS
requirement.
NOTE
When the copy function is used, the application policy information of the port cannot be duplicated.
Prerequisites
l
The WFQ scheduling policy that is used by the V-UNI ingress policy must be configured.
The WRED policy must be configured if the WRED packet discarding policy is required
by the V-UNI ingress policy to control the flow.
Background Information
When you configure the V-UNI ingress policy, configure the following parameters:
l
CoS parameters
This can specify the traffic control parameters and packet discarding policies for traffic of
different QoS levels.
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > QoS
Management > Policy Management > V-UNI Ingress Policy from the Function Tree.
Step 2 Click New. When the Create V-UNI Ingress Policy dialog box is displayed, set the following
parameters:
l The value of the CIR for the queues CS7, CS6, and EF should be equal to the value of the PIR for
these queues. The value of the CIR for the other queues should be not more than the value of the
PIR for the other queues.
l When you configure the value of the PIR for each queue, ensure that the maximum value is not more than 100 times the minimum value.
l The Tail Drop Threshold (256 bytes) and Service WRED Policy parameters cannot be configured
at the same time.
l Choose Traffic Classification Configuration > New. When the Create Traffic
Classification dialog box is displayed, set the following parameters:
Traffic classification ID
ACL rule (by clicking Add)
ACL action
CAR parameters (CIR, PIR)
Coloration mode
l Click OK to exit the dialog box.
Step 3 In the Create V-UNI Ingress Policy dialog box, click OK.
----End
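The CIR/PIR constraints in the note above can be checked with a small validator; the per-queue policy layout is an assumption for illustration:

```python
# Sketch of a validator for the V-UNI ingress policy constraints noted
# above: CS7, CS6, and EF must have CIR == PIR; other queues need
# CIR <= PIR; and across all queues the largest PIR must not exceed
# 100 times the smallest PIR. The policy layout is an assumption.

STRICT_QUEUES = {"CS7", "CS6", "EF"}

def validate_policy(queues: dict) -> list:
    """queues: {queue_name: (cir_kbps, pir_kbps)}. Returns error strings."""
    errors = []
    for name, (cir, pir) in queues.items():
        if name in STRICT_QUEUES and cir != pir:
            errors.append(f"{name}: CIR must equal PIR")
        elif cir > pir:
            errors.append(f"{name}: CIR must not exceed PIR")
    pirs = [pir for _, pir in queues.values()]
    if max(pirs) > 100 * min(pirs):
        errors.append("max PIR exceeds 100 times the min PIR")
    return errors

print(validate_policy({"EF": (2000, 2000), "AF4": (6000, 16000)}))  # []
print(validate_policy({"EF": (1000, 2000)}))  # ['EF: CIR must equal PIR']
```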
Related Information
Based on the V-UNI ingress policy that is created, you can create a new V-UNI ingress policy
quickly by using the copy function. Perform the following procedures:
1.
In the V-UNI Ingress Policy tab page, click Copy. When the Duplicate QoS Policy dialog box is displayed, select the policy to be copied and enter the name of the new policy. Click OK.
2.
Modify the parameters of the new policy to ensure that the new policy meets the QoS
requirement.
NOTE
When the copy function is used, the application policy information cannot be duplicated.
Prerequisites
l
The CAR policy that is used by the V-UNI ingress policy must be configured.
Huawei Proprietary and Confidential
Copyright Huawei Technologies Co., Ltd.
Procedure
Step 1 In the NE Explorer, select the NE, and then choose Configuration > Packet Configuration >
QoS Management > Policy Management > V-UNI Ingress Policy from the Function Tree.
Step 2 Click New. In the Create V-UNI Ingress Policy dialog box that is displayed, set the Policy
ID and Policy Name parameters.
Step 3 Click New. In the Create Traffic Classification dialog box that is displayed, set the Traffic Classification ID parameter.
Step 4 Click Add, and then set Match Type, Match Value, Wildcard, and CoS.
Step 5 Click the selection button. In the CAR Policy dialog box that is displayed, select the CAR policy that needs to be applied, and then click OK.
Step 6 In the Create Traffic Classification dialog box, click OK. In the Create V-UNI Ingress
Policy dialog box, click OK.
----End
Related Information
Based on the V-UNI ingress policy that is created, you can create a new V-UNI ingress policy
quickly by using the copy function. Perform the following procedures:
1.
In the V-UNI Ingress Policy page, click Copy. In the Duplicate QoS Policy dialog box
that is displayed, select the policy to be copied and enter the name of the new policy. Click
OK.
2.
Modify the parameters of the new policy to ensure that the new policy meets the QoS
requirement.
NOTE
When the copy function is used, the application policy information cannot be duplicated.
Prerequisites
l
The WFQ scheduling policy applicable to the V-UNI egress policy must be configured.
The WRED policy must be configured if the WRED packet discarding policy is required
by the V-UNI egress policy to control the flow.
Background Information
When you configure a V-UNI egress policy, configure the following parameters:
l
Flow control parameters and packet discarding policies (such as tail drop or WRED) for
different CoSs
Procedure
Step 1 In the NE Explorer, select the required NE and choose Configuration > Packet
Configuration > QoS Management > Policy Management > V-UNI Egress Policy from the
Function Tree.
Step 2 Click New. In the Create V-UNI Egress Policy dialog box that is displayed, set the following
parameters:
l Policy ID and policy name
l Applied WFQ scheduling policy
Click the selection button. In the WFQ Schedule Policy dialog box that is displayed, select the applied WFQ scheduling policy.
NOTE
l The value of the CIR for queues CS7, CS6, and EF is equal to the value of the PIR for these queues.
The value of the CIR for the other queues is equal to or smaller than the value of the PIR for the
other queues.
l When you configure the value of the PIR for each queue, ensure that the maximum value is equal to or smaller than 100 times the minimum value.
l The Tail Drop Threshold (256 bytes) and Service WRED Policy parameters cannot be configured
at the same time.
Related Information
You can use the copy function to create a new V-UNI egress policy based on the created V-UNI
egress policy. The procedures are as follows:
1.
Click Copy. In the Duplicate QoS Policy dialog box that is displayed, select the policy to
be copied and then enter the new policy name. Click OK.
2.
Modify the parameters of the new policy, so that the new policy can meet the QoS
requirement.
NOTE
The copy function cannot copy the information about the applied policy.
Prerequisites
l
The WFQ scheduling policy that is used by the PW policy must be configured.
The service WRED policy must be configured if the WRED packet discarding policy is
required by the PW policy to control the flow.
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > QoS
Management > Policy Management > PW Policy from the Function Tree.
Step 2 Click New. When the Create PW Policy dialog box is displayed, set the following parameters:
l Policy ID and policy name
l Applied WFQ scheduling policy
Click the selection button. When the WFQ Schedule Policy dialog box is displayed, select the applied WFQ scheduling policy.
NOTE
l The value of the CIR for the queues CS7, CS6, and EF should be equal to the value of the PIR for
these queues. The value of the CIR for the other queues should be not more than the value of the
PIR for the other queues.
l When you configure the value of the PIR for each queue, ensure that the maximum value is not more than 100 times the minimum value.
l The Tail Drop Threshold (256 bytes) and Service WRED Policy parameters cannot be configured
at the same time.
l The bandwidth and PW policy can be configured only in the ingress direction of the PW, not in the egress direction of the PW.
Related Information
Based on the PW policy that is created, you can create a new PW policy quickly by using the
copy function. Perform the following procedures:
1.
Click Copy. When the Duplicate QoS Policy dialog box is displayed, select the policy to be copied and enter the name of the new policy. Click OK.
2.
Modify the parameters of the new policy to ensure that the new policy meets the QoS
requirement.
NOTE
When the copy function is used, the application policy information cannot be duplicated.
Prerequisites
l
The WFQ scheduling policy that is used by the QinQ policy must be configured.
The service WRED policy must be configured if the WRED packet discarding policy is
required by the QinQ policy to control the flow.
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > QoS
Management > Policy Management > QinQ Policy from the Function Tree.
Step 2 Click New. When the Create QinQ Policy dialog box is displayed, set the following parameters:
l Policy ID and policy name
l Applied WFQ scheduling policy
Click the selection button. When the WFQ Schedule Policy dialog box is displayed, select the applied WFQ scheduling policy.
NOTE
l The value of the CIR for the queues CS7, CS6, and EF should be equal to the value of the PIR for
these queues. The value of the CIR for the other queues should be not more than the value of the
PIR for the other queues.
l When you configure the value of the PIR for each queue, ensure that the maximum value is not more than 100 times the minimum value.
l The Tail Drop Threshold (256 bytes) and Service WRED Policy parameters cannot be configured
at the same time.
Related Information
Based on the QinQ policy that is created, you can create a new QinQ policy quickly by using
the copy function. Perform the following procedures:
1.
Click Copy. When the Duplicate QoS Policy dialog box is displayed, select the policy to be copied and enter the name of the new policy. Click OK.
2.
Modify the parameters of the new policy to ensure that the new policy meets the QoS
requirement.
NOTE
When the copy function is used, the application policy information cannot be duplicated.
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > QoS
Management > Policy Management > CAR Policy from the Function Tree.
Step 2 Click New. In the Create CAR Policy dialog box that is displayed, set the following parameters:
l Policy ID and policy name
l CIR, PIR, CBS, and PBS
l Coloration mode and method for processing the marked packet
NOTE
The value of the PIR should not be less than the value of the CIR.
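CAR metering with these parameters can be sketched as a two-token-bucket, three-color marker; the refill logic below is an assumption in the spirit of RFC 2698-style markers, not the equipment's exact algorithm:

```python
# Sketch of CAR metering with two token buckets: CIR/CBS govern the
# green bucket and PIR/PBS the yellow bucket; packets exceeding PIR are
# marked red. Color-blind mode only; parameter names follow the CAR
# policy fields (CIR, PIR, CBS, PBS), the algorithm is an assumption.

class CarMeter:
    def __init__(self, cir_kbps, pir_kbps, cbs_bytes, pbs_bytes):
        assert pir_kbps >= cir_kbps, "PIR must not be less than CIR"
        self.cir_Bps = cir_kbps * 1000 / 8   # committed rate, bytes/s
        self.pir_Bps = pir_kbps * 1000 / 8   # peak rate, bytes/s
        self.cbs, self.pbs = cbs_bytes, pbs_bytes
        self.tc, self.tp = float(cbs_bytes), float(pbs_bytes)  # bucket fills
        self.last = 0.0

    def color(self, size_bytes, now):
        # Refill both buckets for the elapsed time, capped at burst size.
        dt = now - self.last
        self.last = now
        self.tc = min(self.cbs, self.tc + self.cir_Bps * dt)
        self.tp = min(self.pbs, self.tp + self.pir_Bps * dt)
        if self.tp < size_bytes:
            return "red"            # exceeds PIR
        self.tp -= size_bytes
        if self.tc < size_bytes:
            return "yellow"         # within PIR but exceeds CIR
        self.tc -= size_bytes
        return "green"              # within CIR

meter = CarMeter(cir_kbps=2000, pir_kbps=2000, cbs_bytes=1500, pbs_bytes=3000)
print(meter.color(1500, now=0.0))  # green (buckets start full)
```

The resulting color is what the WRED policies downstream act on: green packets survive congestion longest, red packets are discarded first.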
Mapping relation between the service priority and the PHB service level
NOTE
The bandwidths of the V-UNI group and the tunnel are configured when the service is created. For
details about how to configure the bandwidths of the V-UNI group and the tunnel, refer to the specific
topics. This topic focuses on the configuration and application of the QoS policy of each level.
The OptiX OSN equipment that functions as the internal DS node provides the following
hierarchical QoS control functions for the Ethernet traffic:
l
Mapping relation between the service priority and the PHB service level
l The mapping relation between the service priority and the PHB service level on the internal DS node should be consistent with that on the DS edge node. It is recommended that all the equipment use the default mapping relations.
l The QoS configuration for the internal DS node is simpler than the configuration for the DS edge node.
This example describes only the QoS configuration for the DS edge node.
Figure 8-14 shows the QoS configuration for the Ethernet service.
Figure 8-14 Networking diagram of the QoS configuration for the Ethernet service
(Figure: an Ethernet switch connects through an OSN edge DS node, an OSN internal DS node, and another OSN edge DS node to a core switch. The DiffServ domain and the tunnel span the OSN network. QoS function points: V-UNI ingress and port on the access side; PW, tunnel, port, V-UNI egress, and V-UNI group on the network side.)
The OptiX OSN equipment accesses the Ethernet services (VLAN = 20, including VoIP, IPTV,
and data services) of a user to the DiffServ domain through the FE interface. In addition, the
OptiX OSN equipment provides different QoS levels for different types of services of the user.
Table 8-6 provides the requirements for the QoS configuration for the Ethernet service.
Table 8-6 Requirements for the QoS configuration for the Ethernet service
l VoIP service (network segment of the source IP address: 129.5.1.0; PHB: EF)
V-UNI ingress policy: EF; CIR: 2 Mbit/s; PIR: 2 Mbit/s; WRED: 1
PW policy: EF; CIR: 2 Mbit/s; PIR: 2 Mbit/s; WRED: 1
l IPTV service (network segment of the source IP address: 129.5.2.0; PHB: AF4)
V-UNI ingress policy: AF4; CIR: 6 Mbit/s; PIR: 16 Mbit/s; WRED: 1
PW policy: AF4; CIR: 6 Mbit/s; PIR: 16 Mbit/s; WRED: 1
l Data service (PHB: BE)
V-UNI ingress policy: BE; PIR: 16 Mbit/s; WRED: 1
PW policy: BE; PIR: 16 Mbit/s; WRED: 1
For details about the CoS parameters, refer to the description about the "V-UNI egress policy" parameter.
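The traffic classification in Table 8-6 matches packets by source IP network segment; here is a sketch of the match value/wildcard idea, where the 0.0.0.255 wildcard (an assumed /24 mask) marks the bits to ignore:

```python
# Sketch of source-IP traffic classification: a rule has a match value
# (network segment) and a wildcard mask whose set bits are ignored, as
# in ACL-style matching. The rules mirror Table 8-6; the 0.0.0.255
# wildcard is an assumed /24 mask.
import ipaddress

RULES = [
    ("129.5.1.0", "0.0.0.255", "EF"),   # VoIP service
    ("129.5.2.0", "0.0.0.255", "AF4"),  # IPTV service
]

def classify(src_ip: str) -> str:
    ip = int(ipaddress.ip_address(src_ip))
    for value, wildcard, phb in RULES:
        care = 0xFFFFFFFF ^ int(ipaddress.ip_address(wildcard))
        if ip & care == int(ipaddress.ip_address(value)) & care:
            return phb
    return "BE"  # data service: everything else

print(classify("129.5.1.42"))  # EF
print(classify("129.5.2.7"))   # AF4
print(classify("10.0.0.1"))    # BE
```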
Prerequisites
You must be familiar with the networking and data planning of the example.
Procedure
Step 1 Configure the domain.
This example uses the default mapping relation between the service priority and the PHB service
level. Hence, no configuration is required.
l Mapping Relation ID: 1
For the method for configuring the new mapping relation, see 8.9.2 Creating the DiffServ Domain.
Configure the V-UNI ingress policy.
Traffic classification CAR parameters:
l EF: Bandwidth Limit: Enabled; CIR: 2 Mbit/s; PIR: 2 Mbit/s; CBS: 120000 bytes; PBS: 10000000 bytes; Coloration Mode: Color Blindness; Packet Processing Mode: default value
l AF4: Bandwidth Limit: Enabled; CIR: 6 Mbit/s; PIR: 16 Mbit/s; CBS: 120000 bytes; PBS: 10000000 bytes; Coloration Mode: Color Blindness; Packet Processing Mode: default value
l BE: Bandwidth Limit: Enabled; PIR: 16 Mbit/s; CBS: 120000 bytes; PBS: 10000000 bytes; Coloration Mode: Color Blindness; Packet Processing Mode: default value
CoS parameters:
l EF: Bandwidth Limit: Enabled; CIR: 2 Mbit/s; PIR: 2 Mbit/s; CBS: 120000 bytes; PBS: 10000000 bytes; Tail Drop/WRED: service-wred
l AF4: Bandwidth Limit: Enabled; CIR: 6 Mbit/s; PIR: 16 Mbit/s; CBS: 120000 bytes; PBS: 10000000 bytes; Tail Drop/WRED: service-wred
l BE: Bandwidth Limit: Enabled; PIR: 16 Mbit/s; CBS: 120000 bytes; PBS: 10000000 bytes; Tail Drop/WRED: service-wred
NOTE
For the method for configuring the V-UNI ingress policy, see 8.9.8 Creating the V-UNI Ingress Policy (OptiX OSN 3500/7500/7500 II).
Configure the V-UNI egress policy.
CoS parameters:
l EF: Bandwidth Limit: Enabled; CIR: 2 Mbit/s; PIR: 2 Mbit/s; CBS: 120000 bytes; PBS: 10000000 bytes; Tail Drop/WRED: service-wred
l AF4: Bandwidth Limit: Enabled; CIR: 6 Mbit/s; PIR: 16 Mbit/s; CBS: 120000 bytes; PBS: 10000000 bytes; Tail Drop/WRED: service-wred
l BE: Bandwidth Limit: Enabled; PIR: 16 Mbit/s; CBS: 120000 bytes; PBS: 10000000 bytes; Tail Drop/WRED: service-wred
NOTE
For the method for configuring the V-UNI egress policy, see 8.9.10 Creating a V-UNI Egress Policy.
Configure the PW policy.
CoS parameters:
l EF: Bandwidth Limit: Enabled; CIR: 2 Mbit/s; PIR: 2 Mbit/s; CBS: 120000 bytes; PBS: 10000000 bytes; Tail Drop/WRED: service-wred
l AF4: Bandwidth Limit: Enabled; CIR: 6 Mbit/s; PIR: 16 Mbit/s; CBS: 120000 bytes; PBS: 10000000 bytes; Tail Drop/WRED: service-wred
l BE: Bandwidth Limit: Enabled; PIR: 16 Mbit/s; CBS: 120000 bytes; PBS: 10000000 bytes; Tail Drop/WRED: service-wred
NOTE
For the method for configuring the PW policy, see 8.9.11 Creating the PW Policy (OptiX OSN
3500/7500/7500 II).
If the port policy needs to be applied, see 8.9.6 Creating the Port Policy (OptiX OSN 3500/7500/7500
II).
For the method for configuring the tunnel, see Configuring an MPLS Tunnel.
Step 7 Configure the Ethernet service and apply the V-UNI ingress policy, V-UNI egress policy, and
PW policy.
Apply the following policies when configuring the Ethernet service:
l V-UNI ingress policy:
Policy ID: 1
Policy Name: V-UNI-Ingress
l V-UNI egress policy
Policy ID: 1
Policy Name: V-UNI-Egress
l PW policy
Policy ID: 1
Policy Name: PW-Policy
NOTE
For the method for configuring the Ethernet service, see Configuring E-Line Services, Configuring E-LAN
Services, or Configuring E-AGGR Services.
Before applying the V-UNI ingress and PW policies, you need to set the service bandwidth.
----End
8.11 Troubleshooting
Following a proper troubleshooting flow facilitates fault location and rectification. This topic uses an HQoS fault on the OptiX OSN 3500 equipment as an example to describe how to handle an HQoS fault; the method also applies to other types of equipment.
Fault Symptom
The common HQoS fault symptoms are as follows:
l Actual traffic volume exceeds the bandwidth configured for a service. As a result, traffic congestion occurs.
l Services pre-empt bandwidth from each other. As a result, packet loss or bit errors occur in the service whose bandwidth is pre-empted.
l The service with a lower priority pre-empts the bandwidth allocated to the service with a higher priority. As a result, packet loss or bit errors occur in the service with a higher priority.
l The service processing board reports the BUS_ERR alarm and therefore the service processing capability is affected.
Possible Causes
l Cause 1: The service processing board reports the BUS_ERR alarm and therefore the service processing capability is affected.
l Cause 2: No associated QoS policy is configured on the NE.
l Cause 3: An incorrect QoS policy is configured.
l Cause 4: The bandwidth configured for the tunnel or PW is very low.
l Cause 5: A certain board becomes faulty and as a result, the configuration data is not successfully delivered to the board.
Troubleshooting Flow
Following a certain troubleshooting flow facilitates your fault locating and rectification.
(Troubleshooting flowchart: Check whether the BUS_ERR alarm exists; if yes, reseat or replace the board. Check whether no QoS policy is configured; if so, configure QoS policies. Check whether an incorrect QoS policy is configured; if so, configure new policies. Check whether the bandwidth configured for a tunnel or PW is very low; if so, increase the bandwidth. Check whether hardware alarms exist; if yes, clear the alarms. After each action, check whether the fault is rectified; if yes, the procedure ends.)
Procedure
Step 1 Cause 1: The service processing board reports the BUS_ERR alarm and therefore the service processing capability is affected.
1. Check whether the BUS_ERR alarm exists in the system.
2. If yes, clear the alarm. For details on how to clear the alarm, see the Alarms and Performance Events Reference.
Step 2 Cause 2: Check whether the NE is configured with associated QoS policies, including the V-UNI ingress policy, V-UNI egress policy, PW policy, and QinQ policy.
Step 3 Cause 3: Check whether the configured QoS policies are applicable to the services. If not, configure new policies.
Step 4 Cause 4: Check whether the bandwidth configured for the tunnel or PW meets the traffic requirements. If the configured bandwidth is very low, configure the bandwidth again.
Step 5 Cause 5: A certain board becomes faulty and as a result, the configuration data is not successfully delivered to the board.
1. Check whether hardware alarms such as HARD_BAD exist in the system. If yes, clear the alarms. For details on how to clear the alarms, see the Alarms and Performance Events Reference.
Step 6 If the fault persists, contact Huawei technical support engineers to handle the fault.
----End
DiffServ domain parameters:
l Mapping Relation ID: 1, 2 to 8
l Mapping Relation Name: a maximum of 32 bytes
l CVLAN: 0 to 7
l SVLAN: 0 to 7
l DEISVLAN: 0 to 15
l IP DSCP: 0 to 63
l MPLS EXP: 0 to 7
l PHB
Applied object parameters:
l Board
l Available Port: default: -
Port WRED policy parameters:
l Policy ID: for example, 5
l Automatically Assign: Checked, Unchecked
l Policy Name: a maximum of 64 bytes
l Packet Color
l Lower Threshold for Discarding: 0-4095
l Upper Threshold for Discarding: 0-4095
l Discard Probability (%): 1-100; default: 100
WFQ scheduling policy parameters:
l Policy ID: for example, 5
l Automatically Assign: Checked, Unchecked
l Policy Name: a maximum of 64 bytes
l AF1 Scheduling Weight: 1-100; default: 25
l AF2 Scheduling Weight: 1-100; default: 25
l AF3 Scheduling Weight: 1-100; default: 25
l AF4 Scheduling Weight: 1-100; default: 25
Port policy parameters:
l Policy ID: for example, 5
l Automatically Assign: Checked, Unchecked
l Policy Name: a maximum of 64 bytes
l Traffic Classification
l Bandwidth Sharing: Enabled, Disabled; default: Disabled
CoS parameters:
l Bandwidth Limit: Enabled, Disabled
l CIR (kbit/s): 1000 to 10000000
l PIR (kbit/s): 1000 to 10000000
l CBS (byte): 2 to 64000
l PBS (byte): 2 to 64000
l Tail Drop Threshold (%): 0 to 100
Applied object and traffic classification parameters:
l Board
l Available Port
l Selected Port
l Traffic Classification ID: for example, 2
l ACL Action: Permit, Deny
l And, Or (default: And)
l Match Type
l Match Value: default: -
l Wildcard: default: -
V-UNI ingress policy parameters:
l Policy ID: for example, 5
l Automatically Assign: Checked, Unchecked
l Policy Name: a maximum of 64 bytes
l Traffic Classification
l Bandwidth Sharing: Enabled, Disabled; default: Disabled
CoS parameters:
l Enable Bandwidth Restriction: Disabled, Enabled
l CIR(kbit/s)
l PIR(kbit/s)
l CBS(byte): in the CoS Configuration tab: 64 to 131072, 64 as the spacing, default value: 64; in the Traffic Classification Configuration tab: 16000 to 10000000, 64 as the spacing
l PBS(byte): in the CoS Configuration tab: 64 to 16777216, 64 as the spacing, default value: 64; in the Traffic Classification Configuration tab: 16000 to 10000000, 64 as the spacing
l Tail Drop Threshold (256 bytes): 0 to 4095
l Coloring Mode: Color-Blind, Chromatic-Sensitive; default: Color-Blind
l Traffic Classification ID: for example, 2
Traffic classification parameters:
l Traffic Classification Rule: indicates the rules for classifying service packets. Click A.10.1 Traffic Classification Rule (Policy Management) for more information.
l ACL Action: Permit, Deny
l And, Or (default: And)
l Match Value: default: -
l Wildcard
l Packet Color Processing Mode: default: green, yellow: Pass; red: Discard. Click A.10.16 Processing Mode (V-UNI Ingress Policy) for more information.
l Re-Mark CoS
l Re-Mark Color: default: None
Applied object parameters:
l Service ID
l Interface
l V-UNI ID: for example, 1
V-UNI egress policy parameters:
l Policy ID: for example, 5
l Automatically Assign: Checked, Unchecked
l Policy Name: a maximum of 64 bytes
CoS parameters:
l Bandwidth Limit: Disabled, Enabled
l CIR (kbit/s): 320 to 10000000
l PIR (kbit/s): 320 to 10000000
l CBS (byte): 64 to 131072
l PBS (byte): 64 to 16777216
l Tail Drop Threshold (256 bytes): 0 to 4095
Applied object parameters:
l Service ID
l Interface
8.12.7 PW Policy
Configuring a PW policy is to specify the WFQ schedule policy for the traffic on the network
side and to specify different traffic control parameters and packet discarding policies (such as
tail drop or WRED) for different classes of services.
Table 8-13 lists the parameters for a PW policy.
Table 8-13 Parameters for a PW policy
l Policy ID: for example, 5
l Automatically Assign: Checked, Unchecked
l Policy Name: a maximum of 64 bytes
l Policy ID-Policy Name: for example, 1-WFQ Default Scheduling
CoS parameters:
l Bandwidth Limit: Enabled, Disabled
l CIR (kbit/s): 320 to 10000000
l PIR (kbit/s): 320 to 10000000
l CBS (byte): 64 to 131072
l PBS (byte): 64 to 16777216
l Tail Drop Threshold (256 bytes): 0 to 4095
Applied object parameters:
l Policy ID-Policy Name
l Signal Type: Dynamic, Automatic
l PW Type
l PW Direction: Bidirectional, Unidirectional
l Direction: Ingress, Egress
Value Range
Description
Policy ID
For example, 5
Automatically Assign
Issue 03 (2012-11-30)
Checked, Unchecked
337
8 HQoS
Field
Value Range
Description
Policy Name
A maximum of 64
bytes
CoS
Issue 03 (2012-11-30)
Enable Bandwidth
Restriction
Disabled, Enabled
CIR(kbit/s)
320-10000000
PIR(kbit/s)
320-10000000
CBS(byte)
64-131072
PBS(byte)
64-16777216
0-4095
QinQ Link ID
1-4294967295
Physical Port ID
For example, 12
S-VLAN ID
1-4094
Ingress, Egress
Direction
9 IGMP Snooping
IGMP Snooping
If the IGMP Snooping function is disabled, the bridge broadcasts the received packets to
each host.
If the IGMP Snooping function is enabled, after receiving the multicast packet, the bridge
queries the multicast table in which the source port functions as the router port. If the
multicast group that matches the multicast address exists in the multicast table, the bridge
forwards the packet to this multicast group. If the multicast group that matches the multicast
address does not exist in the multicast table, the bridge discards the packet or broadcasts
the packet.
[Figure: Multicast packet forwarding with and without IGMP Snooping. A multicast source, connected to the Internet/Intranet through a multicast management router, sends multicast packets through bridges NE1 and NE2 (equipment that supports the IGMP Snooping function) to Hosts 1 to 5; only the hosts that are group members receive the multicast packets.]
Packets are forwarded only within the range of each VLAN, which enhances information security.
The communication protocol between multicast routers is used to obtain the multicast routing information. This type of protocol includes the Protocol Independent Multicast-Dense Mode (PIM-DM), Protocol Independent Multicast-Sparse Mode (PIM-SM), and Distance Vector Multicast Routing Protocol (DVMRP).
The protocol between multicast routers, hosts, and Layer 2 switches is used to forward multicast packets according to the multicast routing information. This type of protocol includes the IGMP, IGMP Snooping, IGMP Proxy, and Cisco Group Management Protocol (CGMP). The IGMP is a Layer 3 multicast protocol, and the IGMP Snooping, IGMP Proxy, and CGMP are Layer 2 multicast protocols.
9.2.2 IGMP
Contained in the TCP/IP suite, the IGMP protocol is used to manage members of the IP multicast
group. The IGMP protocol creates and maintains the member relations of the multicast group
between the host and its adjacent multicast router.
Through the IGMP protocol, a host notifies the local router that it joins a specific multicast group and accepts the information from this multicast group. Also through the IGMP protocol, the router periodically queries whether any member of a specific group in the LAN is still active (that is, whether the network segment still contains members of that multicast group), and thus collects and maintains the group memberships of the networks connected to the router.
Through this mechanism, the multicast router creates a table that maps each of its ports to the members of each specific group in the subnetwork corresponding to that port. After receiving a packet for a specific group, the router forwards the packet only to the ports that have members of this group.
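The membership table described above can be sketched as a simple mapping from group address to member ports. This is an illustrative model only; the class and method names are hypothetical, not taken from the equipment software.

```python
from collections import defaultdict

class IgmpTable:
    """Minimal sketch of an IGMP group-membership table:
    group address -> set of ports that have group members."""

    def __init__(self):
        self.members = defaultdict(set)

    def report(self, group, port):
        # A host on `port` joins `group` (IGMP report received).
        self.members[group].add(port)

    def forward_ports(self, group):
        # A packet for `group` is forwarded only to ports that
        # have members of that group.
        return sorted(self.members.get(group, set()))
```

A packet for a group with no recorded members yields an empty port list, so it is not forwarded to any host-facing port.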
Router port
The router port refers to a port that faces the multicast router. The Ethernet data boards of the equipment consider the port that receives the IGMP query packet as the router port. Router ports are classified into the following two categories:
Dynamic router port: a port that receives the IGMP query packet. The dynamic router port is dynamically maintained based on the packets transmitted between the router and the host. Each dynamic router port starts a router port aging timer. When the timer expires, the port becomes invalid and the multicast groups that are based on this port are deleted.
Static router port: a port that a user specifies by running the configuration command. The static router port is not aged.
Member port
The multicast group member port refers to a port that faces the host of the group member. The Layer 2 equipment forwards multicast service packets to the member ports. The multicast group member ports, referred to as member ports for short, are classified into the following two categories:
Dynamic member port: a port that receives the IGMP report packet from the host. The dynamic member port is dynamically maintained based on the packets transmitted between the router and the host. Each dynamic member port is aged after the maximum no-response count is reached.
Static member port: a port that a user specifies through the configuration command. The static member port is not aged.
9.4 Availability
The IGMP Snooping function requires the support of the applicable equipment, boards, and
software.
Version Support
Applicable Equipment
Applicable Version
V100R009C03
T2000
U2000
Applicable Equipment
Applicable Version
U2000
Hardware Support
Applicable Board
Applicable Version
Applicable Equipment
N1PEG16
l V100R009C03
N1PEX1
l V100R009C03
N1PEX2
N1PETF8
l V100R009C03
Q1PEGS2
l V100R009C03
l V100R009C03
Applicable Board
Applicable Version
Applicable Equipment
R1PEF4F
l V100R009C03
R1PEFS8
TNN1EG8
TNN1ETMC
TNN1EFF8
Remarks
IGMP
Snooping
Applicable
Object
Remarks
Method for
processing
unknown
multicast
packets and
multicast
aging time
Maintenance Principles
None.
After the bridge receives an IGMP general query packet or an IGMP group-specific query packet, it processes the packet as follows:
1.
The bridge checks whether the port that receives the packet is already learnt as the router port.
2.
If this port is not learnt, the bridge records it as the router port.
3.
If the received packet is an IGMP group-specific query, the receiving port is already recorded as the router port, and the multicast group specified in the packet exists, the bridge forwards the packet within that multicast group and starts the timer for the maximum query response time. Otherwise, the bridge broadcasts the packet in the address domain or VLAN domain of the bridge.
After the bridge receives an IGMP report packet, it processes the packet as follows:
1.
The bridge checks whether the multicast record is already learnt in the address domain or VLAN domain of the bridge.
2.
If the record is not learnt and the multicast group does not exist, the bridge creates the multicast group, takes the receiving port as a multicast member port, and creates the mappings between the router ports, MAC multicast addresses, and multicast group members. If the record is not learnt but the group exists and this port is not among its member ports, the group adds this port as a multicast member port. If the record is already learnt, the bridge resets the no-response count for this multicast member.
After the bridge receives a multicast packet, it processes the packet as follows:
1.
The bridge queries the multicast table in which the source port functions as the router port.
2.
If a multicast group that matches the multicast address exists in the multicast table, the bridge forwards the packet to that multicast group.
3.
If no matching multicast group exists in the multicast table, the bridge discards the packet or broadcasts it in the VLAN domain.
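The forwarding decision above can be sketched as a table lookup. This is an illustrative model, not the equipment's implementation; since the behavior on a table miss is described as either discard or broadcast, both outcomes are modeled with a hypothetical `flood_unknown` switch.

```python
def forward(mcast_table, group, vlan_ports, flood_unknown=False):
    """Return the ports a multicast packet is sent to.

    mcast_table: group address -> list of member ports (a hit means
    the group was learnt via IGMP Snooping).
    On a miss the packet is discarded (empty list) or flooded to
    all ports in the VLAN, depending on configuration.
    """
    if group in mcast_table:
        return mcast_table[group]      # forward to group members only
    return vlan_ports if flood_unknown else []
```
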
The bridge performs the IGMP fast-leave function as follows:
1.
A member port receives an IGMP leave packet for a multicast group.
2.
The bridge deletes this port from the multicast group immediately, without waiting for the aging mechanism.
When a member port fails to respond to queries within a certain period, or a dynamic router port fails to receive a query packet within a certain period, the multicast member or dynamic router port ages. The bridge refreshes the multicast membership by aging multicast members and dynamic router ports.
1.
If the maximum query response time expires, the bridge adds one to the no-response count of the multicast member.
2.
If the no-response count of a multicast member exceeds the threshold, the bridge deletes this multicast member port.
3.
If a multicast group no longer has any multicast member port, the bridge deletes this multicast group.
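The member aging steps above can be sketched as follows. This is an illustrative model only; the function and parameter names are hypothetical, and the default threshold simply mirrors the configurable "maximum times of no response" parameter.

```python
def age_members(groups, no_resp, threshold=3):
    """Delete member ports whose no-response count exceeded the
    threshold, and delete groups left with no member ports.

    groups:  group address -> set of member ports
    no_resp: (group, port) -> count of unanswered queries
    """
    for (g, p), n in list(no_resp.items()):
        if n > threshold and g in groups:
            groups[g].discard(p)       # step 2: delete member port
            if not groups[g]:
                del groups[g]          # step 3: delete empty group
```
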
Aging of the dynamic router port: An aging timer is started for each dynamic router port. If no IGMP query packet is received within a certain period, the timer expires and the router port becomes invalid. In addition, the multicast groups that are based on this router port are deleted.
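Router-port aging can be sketched in the same style. This is an illustrative model only; the class name is hypothetical, and the default aging time merely echoes the configurable router port aging parameter (8 minutes by default, expressed here in seconds).

```python
class RouterPortAger:
    """Sketch of dynamic router-port aging: a port becomes invalid
    if no IGMP query arrives before its aging timer expires."""

    def __init__(self, aging_time=480):   # e.g. 8 min, in seconds
        self.aging_time = aging_time
        self.last_query = {}              # port -> time of last query

    def on_query(self, port, now):
        # Receiving an IGMP query restarts the port's aging timer.
        self.last_query[port] = now

    def expired_ports(self, now):
        # Ports whose timer has expired become invalid; the groups
        # based on them would then be deleted.
        return [p for p, t in self.last_query.items()
                if now - t > self.aging_time]
```
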
[Figure: IGMP Snooping application. A multicast source on the Internet/Intranet sends multicast packets through NE1 and NE2 to Hosts 1 to 5; the hosts that are group members receive the multicast packets.]
Prerequisites
l
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > Ethernet
Protocol Configuration > IGMP Snooping Configuration from the Function Tree.
Step 2 Click the Protocol Configuration tab, and then set the parameters according to the requirement.
After the setting, click Apply.
NOTE
Step 3 Select the multicast service that requires the setting of the quickly delete member port. The
multicast service is displayed in the lower pane.
Step 4 Set the parameters that are related to the quickly delete member port for the multicast service. These parameters include VLAN ID, Port Type, and Port. You can modify the values by double-clicking these parameters.
NOTE
If you need to set a port that carries the multicast service as a quickly delete member port, the port must
have only one multicast service user. Otherwise, different users on the quickly delete port are mutually
affected when they receive multicast services.
Prerequisites
l
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > Ethernet
Protocol Configuration > IGMP Snooping Configuration from the Function Tree.
Step 2 Click the Route Management tab. Click New. When the Create Router Port dialog box is
displayed, set the required parameters and select the multicast router port to be added.
NOTE
Prerequisites
l
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > Ethernet
Protocol Configuration > IGMP Snooping Configuration from the Function Tree.
Step 2 Click the Route Member Port Management tab, and then click New.
Step 3 When the Create Member Port dialog box is displayed, set the required parameters.
NOTE
Prerequisites
l
Procedure
Step 1 Select the NE in the NE Explorer. Choose Configuration > Ethernet Protocol
Configuration > IGMP Snooping Configuration from the Function Tree.
Step 2 Click the Packet Statistics tab, and set the parameters contained in the IGMP protocol packet
statistics.
NOTE
Step 3 Click Apply. The Operation Result dialog box is displayed, indicating that the operation is
successful. Click Close.
----End
[Figure: Networking diagram of the configuration example. Multicast packets enter the NE at port 3-N1PEG8-1; hosts 1, 2, and 3, all group members, are attached to ports 3-N1PEG8-2, 3-N1PEG8-3, and 3-N1PEG8-4.]
Description
Enabled Protocol
Enabled
Unlimited
Table 9-2 provides the planning information about managing the IGMP Snooping routes.
Table 9-2 Planning information about managing the IGMP Snooping routes
Attribute
Description
Service ID
101
VLAN ID
30
Selected Port
3-N1PEG8-1(PORT-1)[30]
Table 9-3 provides the planning information about managing the IGMP Snooping route member ports.
Table 9-3 Planning information about managing the IGMP Snooping route member ports
Attribute
Description
Service ID
101
VLAN ID
30
01-00-5E-00-01-02
Selected Port
3-N1PEG8-2(PORT-2)[30], 3-N1PEG8-3
(PORT-3)[30], 3-N1PEG8-4(PORT-4)[30]
Table 9-4 provides the planning information about packet statistics on the IGMP Snooping
Table 9-4 Planning information about packet statistics on the IGMP Snooping
Attribute
Description
Start
Prerequisites
l
You must understand the networking, requirements and service planning of the example.
Procedure
Step 1 On the U2000, set the IGMP Snooping parameters. For details, see 9.8.1 Setting the IGMP
Snooping Parameters.
Attribute
Description
Enabled Protocol
Enabled
Unlimited
Step 2 On the U2000, configure the route management. For details, see 9.8.2 Configuring the Route
Management.
Attribute
Description
Service ID
101
VLAN ID
30
Selected Port
3-N1PEG8-1(PORT-1)[30]
Step 3 On the U2000, configure the route member port. For details, see 9.8.3 Configuring the Route
Member Port Management.
Attribute
Description
Service ID
101
VLAN ID
30
01-00-5E-00-01-02
Selected Port
3-N1PEG8-2(PORT-2)[30], 3-N1PEG8-3
(PORT-3)[30], 3-N1PEG8-4(PORT-4)[30]
Step 4 On the U2000, configure the packet statistics status. For details, see 9.8.4 Configuring the
Packet Statistics.
Attribute
Description
Start
----End
Relevant Alarms
None.
Value
Description
Service ID
Example: 5
Enabled Protocol
Disabled, Enabled
Default: Disabled
Router Port Aging (min)
1-120
Default: 8
Maximum times of No
Response from Multicast
Members
1-4
Default: 3
Maximum Number of
Multicast Groups
Example: Unlimited
Maximum Number of
Multicast Group Members
Example: Unlimited
Example: 2
Example: 3
VLAN ID
1 to 4094
Default: 1
Port Type
V-UNI, V-NNI
Default: V-UNI
Example: 21-N1PETF8-2
(PORT-2)[55]
Port
Field
Value
Description
Service ID
Example: 5
VLAN ID
1-4094
Port Type
V-UNI, V-NNI
Port
Example: 21-N1PETF8-2
(PORT-2)
Port Status
Dynamic, Static
Example: 2008-1-31
9:58:25:0
Example: 8
Available Port
Example: 21-N1PETF8-2
(PORT-2)[55]
All available.
Selected Port
Example: 21-N1PETF8-2
(PORT-2)[55]
All selected.
Field
Value
Description
Service ID
Example: 5
VLAN ID
1-4094
Example:
01-00-5E-00-01-00
Static, Dynamic
Example: 2008-1-31
10:54:7:0
Port Type
V-UNI, V-NNI
Port
Example: 21-N1PETF8-2
(PORT-2)
Example: 3
Available Port
Example: 21-N1PETF8-2
(PORT-2)[55]
All available.
Selected Port
Example: 21-N1PETF8-2
(PORT-2)[55]
All selected.
Value
Description
Service ID
Example: 5
VLAN ID
1-4094
Port Type
V-UNI, V-NNI
Port
Example: 1
Example: 5
Example: 6
Example: 7
Example: 5
Example: 6
Example: 7
Example: 6
Unrecognized or
Unprocessed Packet Count
Example: 8
Example: 7
10
Link Aggregation
This section uses an example to describe how to configure a LAG to increase Ethernet link
bandwidth.
10.11 Configuration Example (LAG Configured to Provide Link Protection)
This section uses an example to describe how to configure a LAG to provide protection for
Ethernet links.
10.12 Verifying a LAG
This topic describes how to check whether a LAG works properly by verifying the basic functions
of the LAG.
10.13 Troubleshooting
Following a certain troubleshooting flow facilitates your fault locating and rectification.
10.14 Relevant Alarms and Performance Events
This section describes the alarms and performance events that are associated with the link
aggregation feature.
10.15 Relevant Parameters
This section describes parameters associated with LAG management.
10.1 Introduction
As the Ethernet technology is widely applied in the metropolitan area network (MAN) and the wide area network (WAN), carriers impose increasingly higher requirements on the bandwidth and reliability of backbone links that use the Ethernet technology. Hardware upgrades can increase Ethernet link bandwidth but incur high expenditure, and they are less flexible than software upgrades. The link aggregation technology was developed to increase bandwidth flexibly and at low cost.
Link aggregation has the following characteristics:
l
With no need for hardware upgrades, link aggregation binds several Ethernet ports as a higher-bandwidth logical port.
l
The link backup mechanism of the link aggregation technology provides higher link transmission reliability.
l
Link aggregation functions between adjacent NEs and is independent of the network topology.
The logical link aggregating several physical links is called a link aggregation group (LAG).
NOTE
Link aggregation is also called port aggregation because each link corresponds to two specific ports at two
ends in Ethernet transmission.
As shown in Figure 10-1, two adjacent NEs are interconnected through three pairs of Ethernet
ports. Three physical Ethernet links are bound as a logical link, called a LAG.
Figure 10-1 LAG
10.2.1 LACP
Link Aggregation Control Protocol (LACP) functions to aggregate and deaggregate Ethernet
links dynamically.
Developed based on IEEE 802.3ad, LACP performs the following functions:
l
LACP provides the data switching equipment with a standard negotiation mode to form an
aggregation link. After LACP is enabled, the system automatically aggregates multiple
links according to its configuration and enables the aggregation link to transmit and receive
data.
l
LACP allows both ends of the aggregated link to exchange Link Aggregation Control Protocol Data Units (LACPDUs), and therefore helps maintain link status and implements dynamic link aggregation and deaggregation.
NOTE
After member ports are added to a LAG, each end of the LAG sends LACPDUs to inform the
peer of its system priority, MAC address, member port priorities, port numbers, and operational
keys.
After being informed, the peer compares the information with that saved on itself, and selects
ports that can be aggregated. Then, LACP negotiation is performed to select active ports and
links.
Transmission of LACPDUs may be in either of the following modes:
l
Event-triggered transmission
A change in the state of the local system or in the local configuration triggers the generation
and transmission of a new LACPDU.
l
Periodic transmission
When a LAG works stably, LACPDUs are transmitted periodically to convey the real-time LAG status.
Link Aggregation Type | Definition | Application Scenario | Impact on Services
Manual link aggregation
Static link aggregation
NOTE
Load sharing can be based on the following:
l
MAC addresses, including source MAC addresses, destination MAC addresses, and source MAC addresses plus destination MAC addresses
l
MPLS labels
When a member in a LAG changes or a link fails, the traffic is re-allocated automatically.
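The load-sharing idea above can be sketched as a hash over the configured fields: packets of the same flow always hash to the same member link, so frame order within a flow is preserved. This is an illustrative sketch only; the hash function and field choice are hypothetical, not the equipment's actual algorithm.

```python
import zlib

def pick_member(src_mac, dst_mac, links):
    """Select the member link for a frame by hashing its source
    and destination MAC addresses over the available links."""
    key = (src_mac + dst_mac).encode()
    return links[zlib.crc32(key) % len(links)]
```

Because the selection depends only on the frame's addresses and the current link list, re-running it after a member change or link failure re-allocates traffic automatically.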
When load non-sharing link aggregation is set to revertive, services are switched back to
the active link after this link is restored.
When load non-sharing link aggregation is set to non-revertive, services are not switched
back to the active link after this link is restored, but are still transmitted on the standby link.
Definition
Master
port
Standby
port
Port Characteristics
Similarity
Difference
l When creating a
LAG, you need to
specify both the
master and standby
ports.
l The master/standby
state of a port does
not change once
being configured.
NOTE
10.2.5 Priority
LAG priority includes system priority and port priority. Priority setting allows negotiation of
aggregation information between LAGs at two ends and real-time maintenance on link status.
Priority Type
Description
Usage
System priority
Port priority
NOTE
System priority and port priority work together to determine which ports are used to carry services with
preference. System priority prevails over port priority.
10.3 Specifications
This section describes the capacity of the OptiX OSN equipment for supporting LAG.
Table 10-1 provides specifications associated with LAG.
Table 10-1 Specifications associated with LAG
Item
Specifications
Support capability
l Number of LAGs
OptiX OSN 7500 II: 128
OptiX OSN 7500/3500: 64
OptiX OSN 1500: 16
l Number of members in a LAG
OptiX OSN 7500 II/7500/3500: 16
OptiX OSN 1500: 8
l Load sharing
l Load non-sharing
LAG type
l Static
l Manual
Item
Specifications
1 to 30
Default value: 10
Revertive mode
l Revertive (default)
l Non-revertive
Switching time
l Manual aggregation:
The working port is faulty.
The working mode of port changed.
l Static aggregation:
The working port is faulty.
The working mode of port changed.
The priority of port changed.
The priority of system changed.
NOTE
A unidirectional fiber cut triggers a single-ended protection switching lasting not longer than 3s when the
following conditions are met: (1) The LAG is in manual aggregation mode and the ports are full-duplex;
(2) The revertive mode of the LAG is non-revertive; (3) The load sharing mode of the LAG is non-load
sharing.
The system provides intra-board port LAG protection, which complies with the IEEE 802.3ad standard.
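The switching conditions listed above can be sketched as a simple check: manual aggregation reacts only to port faults and working-mode changes, while static aggregation additionally reacts to port- and system-priority changes. The event names below are hypothetical labels for the conditions in the table, not identifiers from the equipment software.

```python
def triggers_switchover(mode, event):
    """Return True if `event` triggers a LAG switchover in the
    given aggregation mode ("manual" or "static")."""
    manual_events = {"port_fault", "work_mode_change"}
    static_events = manual_events | {"port_priority_change",
                                     "system_priority_change"}
    return event in (static_events if mode == "static"
                     else manual_events)
```
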
10.5 Availability
The LAG function requires the support of applicable NE versions and boards.
Version Support
Product
Applicable Version
V100R009C03
T2000
V200R007C03
Product
Applicable Version
U2000
Product
Applicable Version
U2000
Hardware Support
Board | Support for Manual Aggregation | Support for Static Aggregation | Support for Load Sharing | Support for Load Non-Sharing | Port Type
N1PEG16 | Yes | Yes | Yes | Yes | IP
N1PEX1 | Yes | Yes | Yes | Yes | IP
N1PETF8 | Yes | Yes | Yes | Yes | IP
R1PEFS8 | Yes | Yes | Yes | Yes | IP
Q1PEGS2 | Yes | Yes | Yes | Yes | IP
R1PEGS1 | Yes | Yes | Yes | Yes | IP
R1PEF4F | Yes | Yes | Yes | Yes | IP
N1PEG8 | Yes | Yes | Yes | Yes | IP
N2PEX1 | Yes | Yes | Yes | Yes | IP
N1PEX2 | Yes | Yes | Yes | Yes | IP
N1PEFF8 | Yes | Yes | Yes | Yes | IP
N1EDQ41 | Yes | No | No | Yes | VCTRUNK
TNN1EX2 | Yes | Yes | Yes | Yes | IP
TNN1EG8 | Yes | Yes | Yes | Yes | IP
TNN1ETMC | Yes | Yes | Yes | Yes | IP
TNN1EFF8 | Yes | Yes | Yes | Yes | IP
Applicable
Object
Remarks
LAG
LAG
LAG
LAG and
ETH-OAM
The N1PEX2/N2PEX1/N1PEG8/TNN1EX2/TNN1EG8/TNN1ETMC/TNN1EFF8 board does not support Ethernet service OAM (IEEE 802.1ag) and load-sharing LAG at the same time.
VCTRUNK
port on the
N1EDQ41
board
Aggregation
mode
System
priority
Master port
Slave port
Port priority
WTR Time
Applicable
Object
Remarks
LAG
Maintenance Principles
Applicable
Object
Remarks
LAG
Master port
10.7 Principles
LACP implements establishment, switchover, and switchover reversion of static LAGs.
NOTE
LACP is not involved in implementation of manual link aggregation, and therefore the devices at both ends
do not exchange link aggregation control protocol data units (LACPDUs). Manual link aggregation applies
when the OptiX OSN equipment is interconnected with equipment that does not support LACP. For manual
link aggregation, LAG establishment, switchover, and switchover reversion are implemented based on the
port status, port working mode, and port rate. For details on implementation principles of manual link
aggregation, see IEEE 802.3ad.
2.
Devices at both ends determine the Actor based on LACP system priorities and system IDs.
3.
Devices at both ends determine active ports (ports carrying traffic) based on the port
priorities of the Actor.
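Steps 2 and 3 above can be sketched as follows. In LACP, a numerically smaller priority value is the higher priority, with the system ID as tie-breaker; this sketch is illustrative only, and the field layout is a simplification of the real LACPDU contents.

```python
def select_actor(local, peer):
    """Each side is (system_priority, system_id); the side with the
    smaller tuple (higher priority, then lower system ID) is the
    Actor whose port priorities govern the selection."""
    return min(local, peer)

def select_active_ports(ports, n_active):
    """ports: list of (port_priority, port_number) at the Actor.
    The n_active best ports (smallest priority value, then lowest
    port number) become active and carry traffic; the rest are
    standby."""
    return sorted(ports)[:n_active]
```
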
[Figure: Selection of active ports and the active link. NE1 and NE2 are connected by a LAG; each member port has a port priority (1 or 2). NE1 acts as the Actor, and the active ports and the active link are selected based on the information at the Actor.]
Switchover Conditions
In a static LAG, a link switchover is triggered if a device at one end detects one of the following
events:
l
Switchover Process
When any of the preceding trigger conditions is met, the link switchover is performed in the
following procedure:
l
2.
The backup link with the highest priority is selected to replace the faulty active link.
3.
The backup link with the highest priority becomes the active link and then forwards
data.
[Figure: Link switchover in a static LAG. NE1 and NE2 are connected by a LAG whose member ports have port priorities 9 and 10; one link is active and the other is standby.]
NOTE
For a load non-sharing static LAG, if an active port fails and becomes an inactive port, it will work as an
active port again after it recovers. Traffic is re-allocated to member links based on the load sharing
algorithm.
Services are not switched back to a previously active link immediately after this link recovers,
but until the WTR time expires. The WTR time prevents possible frequent link status changes
from affecting service transmission on the entire LAG.
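The WTR behavior described above can be sketched as a simple timing check: a recovered link may become active again only after the WTR time has elapsed since its recovery. This is an illustrative sketch; the function name is hypothetical, and the default mirrors the 10-minute default WTR time mentioned in this chapter.

```python
def can_revert(recovered_at, now, wtr_minutes=10):
    """Return True once the WTR time has elapsed since the
    previously active link recovered, so that traffic may switch
    back without reacting to a still-flapping link."""
    return (now - recovered_at) >= wtr_minutes * 60
```
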
The method of configuring a LAG on OptiX OSN series is similar (except for board names and slots) and
therefore this document uses OptiX OSN 3500 NEs as an example. For boards applicable to a LAG, see
the heading "Availability". For valid slots for the applicable boards, see the hardware description for a
specific OptiX OSN product.
Start
Optional
Static aggregation
Manual aggregation
Sharing
Non-sharing
End
Parameter settings in Table 10-2 , excluding the system priorities, must be the same at both ends of a LAG.
Description
10.8.2
Creatin
g a LAG
Required.
Required.
Optional.
This parameter can be set only after Load Sharing is set
to Sharing.
The default value Automatic is recommended.
Optional.
This parameter can be set only after Load Sharing is set
to Non-Sharing.
If you set this parameter to Revertive, services will be
switched back to the member link with a higher priority
after it recovers from faults. A LAG switchover will cause
services to be unavailable for not more than 500 ms.
Optional.
This parameter can be set only after Revertive Mode is set
to Revertive.
The default value 10min is recommended.
Optional.
Required.
Optional.
Set higher port priorities for ports that are preferred to carry
traffic.
Prerequisites
l
Ports to be added as the master and standby ports in the LAG work in Layer 2 mode.
All Ethernet ports to be added in the LAG work in full-duplex mode and at the same traffic
rate.
The LAG to be created will not contain both Ethernet ports and VCTRUNK ports.
In an inter-board LAG, the master and standby ports are provided by boards of the same
type.
Procedure
Step 1 On the Main Topology, right-click the icon of the desired NE and then choose NE Explorer
from the shortcut menu.
Step 2 In the NE Explorer, select the desired NE and choose Configuration > Packet
Configuration > Interface Management > Link Aggregation Group Management from the
Function Tree.
Step 3 Click the Link Aggregation Group Management tab.
Step 4 Click New. The Create Link Aggregation Group dialog box is displayed.
Step 5 In Attribute Settings, set LAG attributes.
1.
2.
3.
Step 7 Click Apply. In the Confirm dialog box that is displayed, click Yes.
NOTE
After a LAG is configured, services need to be manually reconfigured over the member ports in the LAG. Therefore, services are transiently interrupted.
Step 8 The Result dialog box is displayed, indicating that the operation is successful. Click Close.
----End
Prerequisites
l
A load non-sharing static LAG has been configured. The LAG has several standby ports.
Procedure
Step 1 On the Main Topology, right-click the icon of the desired NE and then choose NE Explorer
from the shortcut menu.
Step 2 In the NE Explorer, select the desired NE and choose Configuration > Packet
Configuration > Interface Management > Link Aggregation Group Management from the
Function Tree.
Step 3 Click the Port Priority tab and set port priorities for the standby ports in the Port Priority
dialog box that is displayed.
Step 4 Click Apply. The Result dialog box is displayed, indicating that the operation is successful.
Click Close.
----End
Prerequisites
l
Procedure
Step 1 On the Main Topology, right-click the icon of the desired NE and then choose NE Explorer
from the shortcut menu.
Step 2 In the NE Explorer, select the desired NE and choose Configuration > Packet
Configuration > Interface Management > Link Aggregation Group Management from the
Function Tree.
Step 3 Click the Link Aggregation Group Management tab. Select one or more LAGs and click
Query. The query results should be consistent with the network plan.
NOTE
l If no LAG queries have been conducted, the port statuses of all ports are Unknown.
l For an active port that is carrying traffic, the port status is Selected.
l For a backup port that does not carry traffic, the port status is Standby.
Step 4 Click the Port Priority tab. Select one or more ports and click Query. The queried port priorities
should be consistent with the network plan.
----End
Prerequisites
l
LAG configurations have been completed at both the local and opposite end.
Procedure
Step 1 On the Main Topology, right-click the icon of the desired NE and then choose NE Explorer
from the shortcut menu.
Step 2 In the NE Explorer, select the desired NE and choose Configuration > Packet
Configuration > Interface Management > Link Aggregation Group Management from the
Function Tree.
Step 3 Click the Link Aggregation Group Management tab. Right-click the desired LAG and choose
Link Aggregation Group Details or Link LACP Packet Statistics from the shortcut menu.
Step 4 In the Link Aggregation Group Details or Link LACP Packet Statistics dialog box that is
displayed, browse the query results.
Step 5 Click Cancel.
----End
Prerequisites
l
LAG configurations have been completed at both the local and opposite end.
CAUTION
Modifying LAG parameter settings interrupts services.
Procedure
Step 1 On the Main Topology, right-click the icon of the desired NE and then choose NE Explorer
from the shortcut menu.
Step 2 In the NE Explorer, select the desired NE and choose Configuration > Packet
Configuration > Interface Management > Link Aggregation Group Management from the
Function Tree.
Step 3 Change basic LAG attributes and port attributes.
1.
Click the Link Aggregation Group Management tab. Select the desired LAG and click
Modify. The Modify Link Aggregation Group dialog box is displayed.
2.
NOTE
3.
NOTE
l Master Board and Master Port: These parameter values cannot be changed.
l Board and Port under Available Standby Ports: Changing these parameter values transiently
interrupts services. Services will not be restored until value changes are completed at both ends.
4.
Click Apply. The Result dialog box is displayed, indicating that the operation is successful.
Click Close.
Click the Port Priority tab. Double-click the Port Priority area of the desired port.
2.
l For a load sharing LAG, changing port priorities does not interrupt services.
l For a load non-sharing LAG:
l If you change the port priority of the working port to a value higher than those of other ports,
services are not interrupted.
l If you change the port priority of the working port to a value lower than that of another
member port, a LAG switchover occurs and services are transiently interrupted.
3.
Click Apply. The Result dialog box is displayed, indicating that the operation is successful.
Click Close.
----End
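The switchover rules above reduce to a single selection rule: the member with the best priority becomes the working port. The sketch below is illustrative only; it assumes the LACP convention that a numerically lower priority value means a higher priority, which this document does not state explicitly.

```python
def select_working_port(ports):
    """Pick the working port of a load non-sharing LAG.

    ports: dict mapping port name -> port priority value.
    Assumption: a numerically lower value means a higher priority
    (the LACP convention); ties are broken by port name.
    """
    return min(ports, key=lambda p: (ports[p], p))

lag = {"PORT-1": 32768, "PORT-2": 32768}
assert select_working_port(lag) == "PORT-1"   # tie broken by name

# Raising the working port's priority keeps it working: no switchover.
lag["PORT-1"] = 100
assert select_working_port(lag) == "PORT-1"

# Lowering it below another member triggers a switchover,
# which transiently interrupts services.
lag["PORT-1"] = 40000
assert select_working_port(lag) == "PORT-2"
```

Because both ends evaluate this rule independently, priority changes must be completed at both ends before the LAG stabilizes, which is why services are transiently interrupted during the change.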
Prerequisites
l
LAG configurations have been completed at both the local and opposite ends.
CAUTION
Deleting a LAG interrupts services carried on the LAG.
Procedure
Step 1 On the Main Topology, right-click the icon of the desired NE and then choose NE Explorer
from the shortcut menu.
Step 2 In the NE Explorer, select the desired NE and choose Configuration > Packet
Configuration > Interface Management > Link Aggregation Group Management from the
Function Tree.
Step 3 Click the Link Aggregation Group Management tab. Select one or more LAGs and click
Delete. In the Confirm dialog box that is displayed, click OK.
Step 4 The Result dialog box is displayed, indicating that the operation is successful. Click Close.
----End
The method of configuring a LAG is similar across the OptiX OSN series (except for board names and slots); therefore, this document uses OptiX OSN 3500 NEs as an example. For boards applicable to a LAG, see the heading "Availability". For valid slots for the applicable boards, see the hardware description for the specific OptiX OSN product.
Service Requirements
As shown in Figure 10-7, NE1 is an OptiX OSN 3500 NE. NE1 and an RNC are interconnected through four GE links, and 4xGE services need to be carried between the RNC and NE1. A single GE link cannot meet this service requirement.
Figure 10-7 Networking application (LAG configured to increase Ethernet link bandwidth)
Solution
In this example, a load sharing LAG solution is applied. In this solution, four GE physical
Ethernet links are bundled as a 4xGE logical link to transmit the 4xGE services between NE1
and the RNC.
Service Planning
The service planning information covers all the parameters required for configuring the NE data.
NOTE
This section provides the service planning information about NE1 only. The service planning information
about the RNC should be consistent with that about NE1.
LAG No.
LAG Name: LAG1
LAG Type
Revertive Mode: Non-revertive
Load Balancing: Sharing
LAG Priority
Master Board: 1-N1PEG8
Master Port: 1 (PORT-1)
Board: 1-N1PEG8
  N1PEG8-2 (PORT-2)
  N1PEG8-3 (PORT-3)
  N1PEG8-4 (PORT-4)
Port Priority
Prerequisites
l
You are familiar with the service networking and planning information.
Procedure
Step 1 Follow instructions in 10.8.2 Creating a LAG and create a LAG.
LAG No.
LAG Name: LAG1
LAG Type
Revertive Mode: Non-revertive
Load Balancing: Sharing
LAG Priority
Master Board: 1-N1PEG8
Master Port: 1 (PORT-1)
Board: 1-N1PEG8
  N1PEG8-2 (PORT-2)
  N1PEG8-3 (PORT-3)
  N1PEG8-4 (PORT-4)
Step 2 Follow instructions in 10.8.3 Setting Port Priorities and set port priorities.
Port Priority
Step 3 Follow instructions in 10.9.1 Querying LAG Configuration and Running Information and
query LAG configuration and running information.
----End
The method of configuring a LAG is similar across the OptiX OSN series (except for board names and slots); therefore, this document uses OptiX OSN 3500 NEs as an example. For boards applicable to a LAG, see the heading "Availability". For valid slots for the applicable boards, see the hardware description for the specific OptiX OSN product.
Service Requirements
As shown in Figure 10-8, NE1 is an OptiX OSN 3500 NE. NE1 and an RNC are interconnected through two GE links, and 1xGE service needs to be carried between the RNC and NE1. The service must not be interrupted when a single GE link fails.
Figure 10-8 Networking application (LAG configured to provide protection for Ethernet links)
Solution
In this example, a load non-sharing LAG solution is applied. In this solution, two GE links are
bundled as a LAG to provide 1+1 link backup and to ensure reliable transmission of the service
between the RNC and NE1.
NOTE
A load non-sharing LAG supported by the OptiX OSN equipment provides only 1+1 link backup.
Service Planning
The service planning information covers all the parameters required for configuring the NE data.
NOTE
This section provides the service planning information about NE1 only. The service planning information
about the RNC should be consistent with that about NE1.
LAG No.
LAG Name: LAG1
LAG Type
Revertive Mode
Load Balancing: Non-Sharing
LAG Priority: 10 (default value)
Master Board: 1-N1PEG8
Master Port: 1 (PORT-1)
Board: 1-N1PEG8
  N1PEG8-2 (PORT-2)
Prerequisites
l
You are familiar with the service networking and planning information.
Procedure
Step 1 Follow instructions in 10.8.2 Creating a LAG and create a LAG.
LAG No.
LAG Name: LAG1
LAG Type
Revertive Mode
Load Balancing: Non-Sharing
LAG Priority: 10 (default value)
Master Board: 1-N1PEG8
Master Port: 1 (PORT-1)
Board: 1-N1PEG8
  N1PEG8-2 (PORT-2)
Step 2 Follow instructions in 10.9.1 Querying LAG Configuration and Running Information and
query LAG configuration and running information.
----End
Prerequisites
The LAG must be configured.
Background Information
NOTE
This topic uses the TDM mode as an example to describe how to verify a LAG; it also serves as a reference for verifying a LAG in packet mode. The difference between LAG verification in the two modes lies in the navigation path to the Link Aggregation Group Management tab page.
l In TDM mode: In the NE Explorer, select the required Ethernet board and choose Configuration >
Interface Management > Link Aggregation Group Management from the Function Tree.
l In packet mode: In the NE Explorer, select the required NE and choose Configuration > Interface
Management > Link Aggregation Group Management from the Function Tree.
Figure 10-9 shows the networking of a LAG, in which the port that is connected to link 1
functions as the main port and the port that is connected to link 2 functions as the slave port.
You can verify the LAG in the following aspects:
l If the LAG is in load non-sharing mode, the LAG works properly after the parameters for the LAG are changed.
l If the LAG is in load non-sharing mode and is set to revertive, the service is switched back to the main port after the fault on the link connected to the main port is rectified.
Procedure
Step 1 Verify that the LAG performs service switching after the network becomes faulty.
1.
2.
In the Main Topology, right-click NE1 and choose NE Explorer from the shortcut menu.
3.
In the NE Explorer, select the required Ethernet board and choose Configuration >
Interface Management > Link Aggregation Group Management from the Function
Tree.
4.
In the right pane, select the configured LAG and then click Query. Check whether the main
and slave ports work properly.
Step 2 Verify that a LAG in load non-sharing mode works properly after the parameters for the LAG are changed.
1.
In the Main Topology, right-click NE1 and choose NE Explorer from the shortcut menu.
2.
In the NE Explorer, select the required Ethernet board and choose Configuration >
Interface Management > Link Aggregation Group Management from the Function
Tree.
3.
In the right pane, select the configured LAG and then click Query. Check whether the main
and slave ports work properly.
4.
Select the configured LAG and then click Modify. Change Load Sharing to Sharing for
NE1. Then, check whether the states of the main and slave ports are In Service.
5.
Click Apply. The Operation Result dialog box is displayed, indicating that the operation
is successful.
6.
7.
Repeat Steps a to c to query the states of the main and slave ports in the LAGs on NE1 and
NE2. The states of the main and slave ports are In Service.
Step 3 Verify that, for a load non-sharing LAG set to revertive, the service is switched back to the main port after the fault on the link connected to the main port is rectified.
1.
In the Main Topology, right-click NE1 and choose NE Explorer from the shortcut menu.
2.
In the NE Explorer, select the required Ethernet board and choose Configuration >
Interface Management > Link Aggregation Group Management from the Function
Tree.
3.
In the right pane, select the configured LAG and then click Query. Check whether the main
and slave ports work properly.
4.
5.
Repeat steps a to c to query the states of the main and slave ports in the LAGs on NE1 and
NE2.
6.
7.
Repeat steps a to c to query the states of the main and slave ports in the LAGs on NE1 and
NE2.
NOTE
If the main and slave ports in the LAGs on NE1 and NE2 are restored to the initial states, it indicates
that the LAGs work properly.
----End
10.13 Troubleshooting
Following a defined troubleshooting flow helps you locate and rectify faults.
Symptom
The common LAG faults are as follows:
l The LAG is faulty and its member ports become unavailable. As a result, the services are interrupted.
l Certain packets are discarded because the member ports in the LAG become unavailable.
Possible Causes
l
Flow Chart
Following a defined troubleshooting flow helps you locate and rectify faults.
NOTE
This topic describes the process for rectifying a LAG fault when the LAG function is enabled in TDM mode. Because the process in packet mode is similar, this topic also serves as a reference for packet mode. The differences between the troubleshooting processes in the two modes are as follows:
l The navigation path to the Link Aggregation Group Management tab page is different.
l The alarms that cause a LAG failure in the two modes are different. In packet mode, the LAG_DOWN or LAG_MEMBER_DOWN alarm causes a LAG failure.
The troubleshooting flow is as follows:
1. Check whether the LAG_FAIL alarm occurs. If yes, handle the alarm.
2. If the fault persists, check whether the configurations of the NEs at both ends are incorrect. If yes, modify the configurations of both NEs.
3. If the fault persists, check whether a loopback is performed at a port. If yes, release the loopback at the port.
4. If the fault persists, check whether the ETH_LOS or ETH_LINK_DOWN alarm occurs. If yes, handle the alarm.
5. If the fault is still not rectified, contact Huawei technical support engineers to handle the fault.
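The flow above can also be read as an ordered sequence of checks. The sketch below is a hypothetical helper that mirrors that ordering; only the alarm names come from this section, while the check inputs and action strings are illustrative.

```python
def troubleshoot_lag(alarms, config_mismatch, port_loopback):
    """Return the first recommended action for a LAG fault.

    alarms: set of active alarm names on the NE.
    config_mismatch / port_loopback: booleans from manual checks.
    The ordering mirrors the troubleshooting flow above.
    """
    if "LAG_FAIL" in alarms:
        return "Handle the LAG_FAIL alarm"
    if config_mismatch:
        return "Modify the configurations of both NEs"
    if port_loopback:
        return "Release the loopback at the port"
    if alarms & {"ETH_LOS", "ETH_LINK_DOWN"}:
        return "Handle the ETH_LOS/ETH_LINK_DOWN alarm"
    return "Contact Huawei technical support engineers"

assert troubleshoot_lag({"LAG_FAIL"}, False, False) == "Handle the LAG_FAIL alarm"
assert troubleshoot_lag(set(), False, True) == "Release the loopback at the port"
```

After each action, re-check whether the fault is rectified before moving to the next check, as the flow indicates.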
Procedure
Step 1 Cause 1: The NEs at both ends of the LAG are configured incorrectly.
1.
2.
If yes, see the Alarms and Performance Events Reference to clear the alarm. Then, check
whether the fault is rectified.
Step 2 Cause 2: The member ports in the LAG work in half-duplex mode.
1.
Check whether the member ports in the LAG work in half-duplex mode.
2.
If yes, change the working mode to full-duplex and ensure that the ports on the local and
opposite NEs work in the same mode.
Step 3 Cause 3: Loopbacks are performed on the member ports in the LAG.
1.
2.
2.
If yes, see the Alarms and Performance Events Reference to clear the alarm. Then, check
whether the fault is rectified.
Step 5 If the fault persists, contact the technical support engineers of Huawei.
----End
LAG_DOWN
  Indicates that a LAG fails. This alarm is reported when the number of activated members in a LAG is 0.
LAG_MEMBER_DOWN
  Indicates that a member port in a LAG fails. This alarm is reported when a member port cannot be activated or cannot work as a backup port.
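The alarm trigger conditions above can be sketched as a small derivation rule. This is an illustrative model only (the state names and alarm string formats are ours, not the NE's internal representation):

```python
def lag_alarms(member_states):
    """Derive LAG alarms from member port states (illustrative only).

    member_states: dict mapping port name -> "active" or "down".
    LAG_DOWN is reported when the number of activated members is 0;
    LAG_MEMBER_DOWN is reported for each member that is not active.
    """
    active = [p for p, s in member_states.items() if s == "active"]
    alarms = []
    if not active:
        alarms.append("LAG_DOWN")
    alarms += [f"LAG_MEMBER_DOWN:{p}"
               for p, s in member_states.items() if s != "active"]
    return alarms

assert lag_alarms({"PORT-1": "down", "PORT-2": "down"})[0] == "LAG_DOWN"
assert lag_alarms({"PORT-1": "active", "PORT-2": "down"}) == ["LAG_MEMBER_DOWN:PORT-2"]
```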
LAG No.
  Value range: OptiX OSN 7500 II: 1-128; OptiX OSN 7500/3500: 1-64; OptiX OSN 1500: 1-16. For example, 1.
LAG Name
  Value range: a character string of fewer than 64 characters, which can contain letters, numerals, and special characters. For example, LAG1.
LAG Type
  Value range: Static, Manual. Default value: Static.
Switch Protocol
Switching Mode
Link Detection Protocol
Revertive Mode
  Value range: Revertive, Non-revertive. Default value: Non-revertive.
Load Balancing
  Value range: Non-Sharing, Sharing. Default value: Non-Sharing.
Load Balancing Hash Algorithm
  Value range: Automatic, Source MAC, Destination MAC, Source and Destination MACs, Source IP, Destination IP Address, Source and Destination IP Address, Source Port Number, Destination Port Number, Source and Destination Port Numbers, MPLS Label. Default value: Automatic.
  Description: Ensure that both ends of a LAG use the same hash algorithm.
  Association with other parameters: this parameter can be set only after Load Balancing is set to Sharing.
LAG Priority
  Value range: 0 to 65535. Default value: 32768.
WTR Time (min)
  Value range: 1 to 30. Default value: 10.
Master Board
  Value range: slot ID-board name. For example, 4-N1PEG8.
  Recommendations:
Master Port
  Value range: port number (PORT-number). For example, 1 (PORT-1).
  Recommendations: Among all the ports to be added to a LAG, only one port is allowed to have been configured with services, and this port must be configured as the master port. If none of these ports is configured with services, any port can be configured as the master port. Set this parameter according to the plan.
Board
  Value range: slot ID-board name. For example, 4-N1PEG8.
Port
  Value range: slot ID-board name-port number (PORT-number). For example, 4-N1PEG8-2 (PORT-2).
Selected Standby Ports
  Value range: slot ID-board name-port number (PORT-number). For example, 4-N1PEG8-2 (PORT-2).
  Recommendations:
Port
  Value range: slot ID-board name-port number (PORT-number). For example, 4-N1PEG8-2 (PORT-2).
Port Priority
  Value range: 0 to 65535. Default value: 32768.
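A load sharing LAG maps each frame to a member port by hashing the fields selected by Load Balancing Hash Algorithm, which is why both ends of the LAG must use the same algorithm. The sketch below illustrates the idea with a CRC32 hash over a source/destination MAC pair; it is not the board's actual hash function, and the port names are taken from the examples above.

```python
import zlib

MEMBER_PORTS = ["PORT-1", "PORT-2", "PORT-3", "PORT-4"]

def pick_member_port(src_mac, dst_mac):
    """Map a frame to a member port by hashing its MAC pair.

    Frames of the same conversation (same MAC pair) always land on
    the same port, which preserves frame ordering within a flow.
    """
    digest = zlib.crc32(f"{src_mac}|{dst_mac}".encode())
    return MEMBER_PORTS[digest % len(MEMBER_PORTS)]

# The same flow is always mapped to the same member port:
p1 = pick_member_port("00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:ee")
p2 = pick_member_port("00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:ee")
assert p1 == p2
```

If the two ends hashed on different fields, a flow could leave on one member port and be expected on another, so the mapping must be symmetric across the LAG.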
LAG No.
  Value range: 1 to 128. For example, 1.
Local System ID
  Value range: LAG priority, system MAC address. For example, 32768, 8-0-62-9-0-8.
Local Port
  Value range: slot ID-board name-port number (PORT-number). For example, 4-N1PEG8-2 (PORT-2).
Local Logical Port Number
  For example, 3586.
Local Port Priority
  For example, 32768.
Local Port Selection Status
  Value range: Selected, Standby, Unselected.
Local Port Service Bearer
  Value range: YES, NO.
Reference Port
  Value range: YES, NO.
Local Operation KEY
  For example, 257.
Local LACP Protocol Flag
  For example, 71. This parameter displays the LACP flag used at the local end.
Opposite System ID
  Value range: LAG priority, system MAC address. For example, 32768, 8-0-62-9-0-8.
Opposite Logical Port Number
  For example, 65535.
Opposite Port Priority
  For example, 32768.
Opposite Operation KEY
  For example, 256.
Opposite LACP Protocol Flag
  For example, 72.
11 LPT
This section uses an example to describe how to configure LPT based on an E-Line service
exclusively occupying a UNI port.
11.11 Configuration Example (LPT Based on Ethernet Services Sharing a UNI Port)
This section uses an example to describe how to configure LPT based on Ethernet services
sharing a UNI port.
11.12 Verifying LPT
To check whether the LPT is configured successfully, simulate the fault on the link where the
access node is involved and query the reported alarm.
11.13 Troubleshooting
To locate and rectify the common faults in a timely manner, you need to get familiar with the
fault symptoms and the troubleshooting flow. This topic describes how to handle common LPT
faults by using point-to-multipoint LPT as an example.
11.14 Relevant Alarms and Performance Events
This section describes the alarms and performance events that are associated with the LPT
feature.
11.15 Relevant Parameters
This section describes parameters for configuring point-to-point LPT and for configuring point-to-multipoint LPT.
11.1 Introduction
With LPT enabled, transmission NEs detect faults on a service access node or on a service
network and instruct the devices at both ends of the service network to immediately activate a
backup network for communication.
Existing and conventional network protection schemes function when faults are found on a service access node or on a service network. With LPT enabled, the devices at both ends of the service network are instructed to immediately activate the backup network for service transmission once faults are detected on a service access node or on the service network.
As shown in Figure 11-1, LPT-enabled NE1 and NE2 disconnect their access links from the
NodeB and the RNC if the link between NE1 and the NodeB, the link between NE2 and the
RNC, or the service network becomes faulty. The NodeB and the RNC immediately detect the
link failure between them, and switch to a backup network for communication.
Figure 11-1 Typical application of LPT
LPT OAM
The LPT protocol components exchange LPT protocol packets between LPT NEs on a
service network and determine the status of the LPT links between the NEs based on the
negotiation result.
PW OAM
The LPT protocol components exchange LPT protocol packets between LPT NEs on a
service network and determine the status of the LPT links between the NEs based on the
negotiation result and the statuses of PWs carried on the LPT links. The LPT protocol
components consider that LPT links are working properly only when negotiation using LPT
packets is successful and PWs are functional. If negotiation using LPT packets fails or PWs
are faulty, the LPT protocol components consider that the LPT links are faulty.
Primary point
The primary point is the entity running the LPT protocol. The LPT protocol components
determine the LPT status based on the running status of the primary point.
Secondary point
A secondary point detects and transfers changes in the LPT port status, such as changes in
the local port status and changes in the remote point status.
NOTE
In point-to-point LPT configuration, there is one primary point and one secondary point. In point-to-multipoint LPT configuration, there is one primary point and there are several secondary points, which correspond to the root node and the leaf nodes in a networking topology.
Switching Modes
l Strict mode
A primary point triggers LPT switching only when all of its secondary points detect faults.
l Non-strict mode
A primary point triggers LPT switching when any one of its secondary points detects a fault.
NOTE
In point-to-point LPT configuration, there is only one secondary point and therefore the switching mode
does not make any difference.
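The two switching modes differ only in how the primary point aggregates fault reports from its secondary points, which the following sketch makes explicit (an illustrative model; the function and parameter names are ours):

```python
def primary_should_switch(faulty_secondaries, all_secondaries, strict):
    """Decide whether the primary point triggers LPT switching.

    Strict mode: switch only when every secondary point reports a fault.
    Non-strict mode: switch as soon as any secondary point reports one.
    """
    if strict:
        return faulty_secondaries == all_secondaries
    return bool(faulty_secondaries)

leaves = {"B", "C"}                                   # two leaf nodes
assert primary_should_switch({"B"}, leaves, strict=True) is False
assert primary_should_switch({"B"}, leaves, strict=False) is True
assert primary_should_switch({"B", "C"}, leaves, strict=True) is True
```

With a single secondary point (point-to-point LPT), both branches evaluate identically, which is why the mode makes no difference there.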
11.3 Specifications
This section describes the specifications of the LPT feature.
Table 11-1 Specifications of the LPT feature
Item
Specifications
Number of NEs
supported by LPT
Switching mode
l Strict mode
l Non-strict mode
NOTE
The LPT switching mode is available only to point-to-multipoint LPT.
Item
Specifications
Fault detection
mechanism
LPT OAM
PW OAM
Switching condition
  NNI side:
  l ETH_LOS
  l ETH_LINK_DOWN
  l LSR_NO_FITED
  l dServer
  l dLOCV
  l dTTSI_Mismatch
  l dTTSI_Mismerge
  l dExcess
  UNI side:
  l ETH_LOS
  l ETH_LINK_DOWN
  l LSR_NO_FITED
Switching time
11.5 Availability
The LPT feature requires the support of the applicable NEs, NMSs, and boards.
Version Support

Product          Applicable Version
                 V100R009C03
T2000            V200R007C03
U2000
Hardware Support

Board Type: N1PEG16, N1PEX1, N1PETF8, R1PEFS8, Q1PEGS2, R1PEGS1, N1PEG8, N2PEX1, N1PEX2, N1PEFF8, R1PEF4F, TNN1EX2, TNN1EG8, TNN1ETMC, TNN1EFF8
Applicable Object: LPT (point-to-point LPT only for certain boards; tunnel APS and LPT for certain boards)
Maintenance Principles
11.7 Principles
This section describes LPT working principles.
The LPT state transition process is the same regardless of whether LPT uses LPT OAM or PW OAM to
detect faults.
In the LPT state transition process, the common LPT states are as follows:
l Initial state: The LPT protocol components have not received LPT protocol packets.
l Access-side faulty state: A port fault or link fault occurs on the local service access side.
NOTE
l If an LPT-enabled UNI port is configured in a LAG, LPT switching is triggered only when all ports or links in the LAG are faulty.
l If an LPT-enabled UNI port is not configured in a LAG, LPT switching is triggered when the port or its link is faulty.
l Network-side faulty state: A local NNI port or link is faulty, or a peer UNI port or link is faulty.
1.
LPT changes from the initial state to a normal state once the LPT protocol components receive LPT protocol packets.
2.
LPT changes from a normal state to the initial state once the LPT protocol components fail
to receive LPT protocol packets.
3.
LPT changes from a normal state to the network-side faulty state once the LPT protocol
components detect network-side faults.
4.
LPT changes from the network-side faulty state to the wait-to-restore (WTR) state once
the LPT protocol components detect that the network-side faults have been rectified.
5.
LPT changes from the WTR state to the network-side faulty state once the LPT protocol
components detect network-side faults.
6.
LPT changes from the WTR state to the fault recovery state once the LPT protocol
components detect that access-side faults have been rectified.
7.
LPT changes from the fault recovery state to a normal state once the LPT protocol
components detect that all access-side and network-side faults have been rectified.
8.
LPT changes from the fault recovery state to the network-side faulty state once the LPT
protocol components detect network-side faults.
9.
LPT changes from the fault recovery state to the access-side faulty state once the LPT
protocol components detect access-side faults.
10. LPT changes from the access-side faulty state to the WTR state once the LPT protocol
components detect that the access-side faults have been rectified.
11. LPT changes from the WTR state to the access-side faulty state once the LPT protocol
components detect access-side faults.
12. LPT changes from the access-side faulty state to the network-side faulty state once the LPT
protocol components detect network-side faults.
13. LPT changes from a normal state to the access-side faulty state once the LPT protocol
components detect access-side faults.
14. LPT changes from the access-side faulty state to the initial state once the LPT protocol
components do not receive LPT protocol packets.
15. LPT changes from the initial state to the access-side faulty state once the LPT protocol
components receive LPT protocol packets and detect access-side faults.
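The transitions above can be condensed into a small state table. The sketch below is an illustrative model covering a representative subset of the 15 transitions; the state and event names are ours, not the equipment's internal representation.

```python
# (current state, event) -> next state, condensed from the list above.
TRANSITIONS = {
    ("initial", "packets_received"):         "normal",
    ("normal", "packets_lost"):              "initial",
    ("normal", "network_fault"):             "network_faulty",
    ("normal", "access_fault"):              "access_faulty",
    ("network_faulty", "network_recovered"): "wtr",
    ("access_faulty", "access_recovered"):   "wtr",
    ("access_faulty", "network_fault"):      "network_faulty",
    ("access_faulty", "packets_lost"):       "initial",
    ("wtr", "network_fault"):                "network_faulty",
    ("wtr", "access_fault"):                 "access_faulty",
    ("wtr", "faults_recovered"):             "recovery",
    ("recovery", "all_clear"):               "normal",
    ("recovery", "network_fault"):           "network_faulty",
    ("recovery", "access_fault"):            "access_faulty",
}

def step(state, event):
    """Apply one event; unknown (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

s = step("initial", "packets_received")   # transition 1 -> "normal"
s = step(s, "network_fault")              # transition 3 -> "network_faulty"
s = step(s, "network_recovered")          # transition 4 -> "wtr"
assert s == "wtr"
```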
Fault detection
1.
When detecting that the link connected to access node A is faulty, service equipment A
transmits Broken packets to service equipment B.
2.
After receiving the Broken packets, service equipment B disconnects its link from access
node B and reports the LPT_CFG_CLOSEPORT alarm.
Fault recovery
1.
After detecting that the link connecting to access node A has recovered, service equipment
A stops reporting the link fault alarm and transmits Non_Broken packets to service
equipment B.
2.
After receiving the Non_Broken packets, service equipment B restores its link to access
node B.
Fault detection
Service equipment A and service equipment B can detect whether the service network is available. If the service network is unavailable, service equipment A and service equipment B transmit Broken packets to each other, disconnect their links from access node A and access node B, and report LPT_CFG_CLOSEPORT alarms.
NOTE
If the service network is unavailable only in one direction, only the equipment at one end (for example,
service equipment A) can detect the service network fault and immediately disconnect its link from access
node A. At the same time, service equipment A transmits Broken packets to service equipment B. After
receiving the Broken packets, service equipment B disconnects its link from access node B and reports the
LPT_CFG_CLOSEPORT alarm.
Fault recovery
1.
When detecting that the service network has recovered, service equipment A and service equipment B transmit Non_Broken packets to each other to notify link recovery. In addition, service equipment A and service equipment B restore their links to access node A and access node B.
2.
After service equipment A and service equipment B receive Non_Broken packets from each other, or after the wait for Broken packets times out, service equipment A and service equipment B determine that the entire LPT link is functional and then activate the links between each other.
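The per-packet reaction described above can be sketched as a simple mapping from received packet to local actions (an illustrative helper; the action strings are ours, while the packet and alarm names come from this section):

```python
def peer_action(received):
    """React to an LPT packet from the peer (point-to-point, illustrative).

    Broken: disconnect the local access link and report the alarm.
    Non_Broken: restore the local access link and clear the alarm.
    Returns (access_link_action, alarm_action).
    """
    if received == "Broken":
        return ("disconnect access link", "report LPT_CFG_CLOSEPORT")
    if received == "Non_Broken":
        return ("restore access link", "clear LPT_CFG_CLOSEPORT")
    return ("no change", "none")

assert peer_action("Broken") == ("disconnect access link",
                                 "report LPT_CFG_CLOSEPORT")
```

The same mapping applies at every leaf in the point-to-multipoint scenarios that follow; only the set of peers exchanging the packets changes.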
Fault detection
1.
When detecting that the link connected to access node A is faulty, service equipment A
transmits Broken packets to service equipment B and service equipment C.
2.
After receiving the Broken packets, service equipment B and service equipment C
disconnect their links from access node B and access node C and report
LPT_CFG_CLOSEPORT alarms.
Fault recovery
1.
After detecting that the link connecting to access node A has recovered, service equipment
A stops reporting the link fault alarm and transmits Non_Broken packets to service
equipment B and service equipment C.
2.
After receiving the Non_Broken packets, service equipment B and service equipment C
restore their links to access node B and access node C and stop reporting
LPT_CFG_CLOSEPORT alarms.
Fault detection
1.
When detecting that the link connected to access node B is faulty, service equipment B
transmits Broken packets to service equipment A.
2.
After receiving the Broken packets, service equipment A disconnects its link from access
node A, reports the LPT_CFG_CLOSEPORT alarm, and sends Broken packets to service
equipment C.
3.
After receiving the Broken packets, service equipment C disconnects its link from access
node C and reports the LPT_CFG_CLOSEPORT alarm.
Fault recovery
1.
After detecting that the link connecting to access node B has recovered, service equipment
B stops reporting the link fault alarm and transmits Non_Broken packets to service
equipment A.
2.
After receiving the Non_Broken packets, service equipment A restores its link to access node A, stops reporting the LPT_CFG_CLOSEPORT alarm, and transmits Non_Broken
packets to service equipment C.
3.
After receiving the Non_Broken packets, service equipment C restores its link to access
node C and stops reporting the LPT_CFG_CLOSEPORT alarm.
Fault detection
1.
When detecting that the link connected to access node B is faulty, service equipment B
reports the link fault alarm and transmits Broken packets to service equipment A.
2.
When detecting that the link connected to access node C is faulty, service equipment C
reports the link fault alarm and transmits Broken packets to service equipment A.
3.
After receiving Broken packets from all other service equipment, service equipment A
disconnects its link from access node A and reports the LPT_CFG_CLOSEPORT alarm.
Fault recovery
1.
After detecting that the link connecting to access node B has recovered, service equipment
B stops reporting the link fault alarm and transmits Non_Broken packets to service
equipment A.
2.
After detecting that the link connecting to access node C has recovered, service equipment
C stops reporting the link fault alarm and transmits Non_Broken packets to service
equipment A.
3.
After receiving Non_Broken packets from all other service equipment, service equipment
A restores its link to access node A and stops reporting the LPT_CFG_CLOSEPORT alarm.
Fault detection
1.
A bidirectional fault occurs on the service network between service equipment A and
service equipment B. As a result, service equipment A and service equipment B disconnect
their links from access node A and access node B and report LPT_CFG_CLOSEPORT
alarms. In addition, service equipment A transmits Broken packets to service equipment
C.
2.
After receiving the Broken packets, service equipment C disconnects its link from access
node C and reports the LPT_CFG_CLOSEPORT alarm.
NOTE
If the service network is unavailable only in one direction, only the equipment at one end (for example,
service equipment A) can detect the service network fault and immediately disconnect its link from access
node A. At the same time, service equipment A transmits Broken packets to service equipment B. After
receiving the Broken packets, service equipment B disconnects its link from access node B and reports the
LPT_CFG_CLOSEPORT alarm.
Fault recovery
1.
After detecting that the bidirectional fault on the service network has been rectified, service
equipment A and service equipment B restore their links to access node A and access node
B and stop reporting LPT_CFG_CLOSEPORT alarms. In addition, service equipment A
transmits Non_Broken packets to service equipment C.
2.
After receiving the Non_Broken packets, service equipment C restores its link to access
node C and stops reporting the LPT_CFG_CLOSEPORT alarm.
Fault detection
1.
A bidirectional fault occurs on the service network between service equipment A and
service equipment B. As a result, service equipment B disconnects its link from access node
B and reports the LPT_CFG_CLOSEPORT alarm.
2.
A bidirectional fault occurs on the service network between service equipment A and
service equipment C. As a result, service equipment C disconnects its link from access node
C and reports the LPT_CFG_CLOSEPORT alarm.
3.
After receiving Broken packets from all other service equipment, service equipment A
disconnects its link from access node A and reports the LPT_CFG_CLOSEPORT alarm.
NOTE
If the service network is unavailable only in one direction, only the equipment at one end (for example,
service equipment A) can detect the service network fault and immediately disconnect its link from access
node A. At the same time, service equipment A transmits Broken packets to service equipment B. After
receiving the Broken packets, service equipment B disconnects its link from access node B and reports the
LPT_CFG_CLOSEPORT alarm.
Fault recovery
1. After the bidirectional fault on the service network between service equipment A and service equipment B is rectified, service equipment B restores its link to access node B and stops reporting the LPT_CFG_CLOSEPORT alarm.
2. After the bidirectional fault on the service network between service equipment A and service equipment C is rectified, service equipment C restores its link to access node C and stops reporting the LPT_CFG_CLOSEPORT alarm.
3. After receiving Non-Broken packets from all other service equipment, service equipment A restores its link to access node A and stops reporting the LPT_CFG_CLOSEPORT alarm.
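The strict-mode behaviour described above (the primary point closes its access link only after every other service equipment reports Broken, and reopens it only after every one reports Non-Broken) can be sketched as a small model. This is an illustrative sketch, not Huawei's implementation; the function and value names are assumptions:

```python
def primary_port_state(peer_reports):
    """Decide the primary point's access-port state in strict mode.

    peer_reports maps each peer service equipment to the last LPT
    packet received from it: "BROKEN" or "NON_BROKEN". The port is
    closed only when every peer reports BROKEN.
    """
    if peer_reports and all(r == "BROKEN" for r in peer_reports.values()):
        return "CLOSED"  # the equipment would report LPT_CFG_CLOSEPORT
    return "OPEN"

# Both branches faulty: the primary point closes its access link.
assert primary_port_state({"B": "BROKEN", "C": "BROKEN"}) == "CLOSED"
# Only one branch faulty: in strict mode, the link stays up.
assert primary_port_state({"B": "BROKEN", "C": "NON_BROKEN"}) == "OPEN"
```

Under non-strict mode the condition would presumably be any() rather than all(), which matches the recommendation that strict mode prevents a single-point fault from triggering a network-wide switchover.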
The method of configuring LPT on OptiX OSN series NEs is similar (except for board names and slots); therefore, this document uses OptiX OSN 3500 NEs as an example. For boards applicable to LPT, see the heading "Availability". For valid slots for the applicable boards, see the hardware description for a specific OptiX OSN product.
Table 11-2

Operation: 11.8.2 Configuring Point-to-Point LPT
Description: Required. Configure the point-to-point LPT detection mechanism.
l The LPT OAM detection mechanism uses LPT protocol packets to detect faults at a minimum interval of 1 s.
l The PW OAM detection mechanism uses PW OAM packets to detect faults at a minimum interval of 3.3 ms.
When using the minimum detection period of 3.3 ms, the PW OAM detection mechanism supports faster LPT switching but consumes more system resources than the LPT OAM detection mechanism. Usually, the LPT OAM detection mechanism (the default value) is used. When the LPT OAM detection mechanism cannot support a desired LPT switching speed, the PW OAM detection mechanism is used.
NOTE
Service exclusively occupying a UNI port: one UNI port carries only one service, and the service is configured with no service VLAN ID or with the VLAN ID configured for the port DCN function.
If a PW-carried E-Line service exclusively occupying a UNI port has been configured, configure point-to-point LPT.

Operation: 11.8.3 Configuring Point-to-Multipoint LPT
Description: Required. Configure the primary and secondary LPT points.
If Primary Function Point Type is set to UNI, Secondary Function Point Type can be set only to PW. If Primary Function Point Type is set to PW, Secondary Function Point Type can be set only to UNI.
NOTE
UNI-shared service: one UNI port carries one or more services, and the services are differentiated by VLAN IDs.
If PW-carried Ethernet services sharing a UNI port have been configured, configure point-to-multipoint LPT.

Description: Required. Configure the switching mode of point-to-multipoint LPT.
If there are many secondary LPT points and these points carry unimportant services, the strict LPT switching mode is recommended to prevent a single-point fault from triggering switchover of the entire network. If there are only a few secondary LPT points and these points carry important services, the non-strict LPT switching mode is recommended.

Description: Required. Configure the point-to-multipoint LPT detection mechanism.
l The LPT OAM detection mechanism uses LPT protocol packets to detect faults at a minimum interval of 1 s.
l The PW OAM detection mechanism uses PW OAM packets to detect faults at a minimum interval of 3.3 ms.
When using the minimum detection period of 3.3 ms, the PW OAM detection mechanism supports faster LPT switching but consumes more system resources than the LPT OAM detection mechanism. Usually, the LPT OAM detection mechanism (the default value) is used. When the LPT OAM detection mechanism cannot support a desired LPT switching speed, the PW OAM detection mechanism is used.
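The recommendation for choosing between the two detection mechanisms can be captured in a small helper. This is an illustrative sketch of the selection rule only; the function name is an assumption, while the thresholds (1 s for LPT OAM, 3.3 ms for PW OAM) come from the comparison above:

```python
def choose_detection_mechanism(required_detection_s):
    """Pick an LPT fault-detection mechanism for a target detection time.

    LPT OAM detects at a minimum interval of 1 s and is the default;
    PW OAM goes down to 3.3 ms but consumes more system resources,
    so it is chosen only when LPT OAM is too slow.
    """
    if required_detection_s >= 1.0:
        return "LPT OAM"   # default, lighter on system resources
    if required_detection_s >= 0.0033:
        return "PW OAM"    # faster switching, higher resource use
    raise ValueError("no mechanism detects faults faster than 3.3 ms")

assert choose_detection_mechanism(1.0) == "LPT OAM"
assert choose_detection_mechanism(0.05) == "PW OAM"
```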
Prerequisites
l A PW-carried E-Line service exclusively occupying a UNI port has been created.
Procedure
Step 1 On the Main Topology, right-click the desired NE icon and choose NE Explorer from the
shortcut menu.
Step 2 In NE Explorer, select the desired NE from the Object Tree and choose Configuration > Packet
Configuration > LPT Management from the Function Tree.
Step 3 Click the Point-to-Point LPT tab.
Step 4 Select the desired E-Line service and click Bind.
Prerequisites
l PW-carried Ethernet services that share a UNI port have been created.
Procedure
Step 1 On the Main Topology, right-click the desired NE icon and choose NE Explorer from the
shortcut menu.
Step 2 In NE Explorer, select the desired NE from the Object Tree and choose Configuration > Packet
Configuration > LPT Management from the Function Tree.
Step 3 Click the Point-to-Multipoint LPT tab.
Step 4 Click New. In the Create LPT dialog box that is displayed, set the desired parameters.
Prerequisites
l
Procedure
Step 1 On the Main Topology, right-click the desired NE icon and choose NE Explorer from the
shortcut menu.
Step 2 In NE Explorer, select the desired NE from the Object Tree and choose Configuration > Packet
Configuration > LPT Management from the Function Tree.
Step 3 Click the Point-to-Point LPT tab.
Step 4 Click Query to learn whether point-to-point LPT is bound to an E-Line service.
----End
Prerequisites
l
Procedure
Step 1 On the Main Topology, right-click the desired NE icon and choose NE Explorer from the
shortcut menu.
Step 2 In NE Explorer, select the desired NE from the Object Tree and choose Configuration > Packet
Configuration > LPT Management from the Function Tree.
Step 3 Click the Point-to-Point LPT tab.
Step 4 Select the desired point-to-point LPT and set LPT Enabled to Disabled.
Step 5 Change point-to-point LPT attributes.
Prerequisites
l
Procedure
Step 1 On the Main Topology, right-click the desired NE icon and choose NE Explorer from the
shortcut menu.
Step 2 In NE Explorer, select the desired NE from the Object Tree and choose Configuration > Packet
Configuration > LPT Management from the Function Tree.
Step 3 Click the Point-to-Point LPT tab.
Step 4 Select the desired point-to-point LPT and click Delete.
----End
Prerequisites
l
Procedure
Step 1 On the Main Topology, right-click the desired NE icon and choose NE Explorer from the
shortcut menu.
Step 2 In NE Explorer, select the desired NE from the Object Tree and choose Configuration > Packet
Configuration > LPT Management from the Function Tree.
Step 3 Click the Point-to-Multipoint LPT tab.
Step 4 Click Query to learn whether point-to-multipoint LPT is bound to Ethernet services.
----End
Prerequisites
l
Procedure
Step 1 On the Main Topology, right-click the desired NE icon and choose NE Explorer from the
shortcut menu.
Step 2 In NE Explorer, select the desired NE from the Object Tree and choose Configuration > Packet
Configuration > LPT Management from the Function Tree.
Step 3 Click the Point-to-Multipoint LPT tab.
Step 4 Select the desired point-to-multipoint LPT and click Modify. In the dialog box that is displayed,
change primary and secondary points.
NOTE
Prerequisites
l
Procedure
Step 1 On the Main Topology, right-click the desired NE icon and choose NE Explorer from the
shortcut menu.
Step 2 In NE Explorer, select the desired NE from the Object Tree and choose Configuration > Packet
Configuration > LPT Management from the Function Tree.
Step 3 Click the Point-to-Multipoint LPT tab.
Step 4 Select the desired point-to-multipoint LPT and set LPT Enabled to Disabled.
NOTE
l Recovery Time (s): Changing this parameter value does not interrupt services.
l Hold-Off Time (ms): Changing this parameter value does not interrupt services.
l Switching Mode: Changing this parameter value does not interrupt services.
l Fault Detection Mode: Changing this parameter value transiently interrupts services. Services will
not be restored until value changes are completed at both ends.
l Fault Detection Period (100 ms): Changing this parameter value does not interrupt services.
Prerequisites
l
Procedure
Step 1 On the Main Topology, right-click the desired NE icon and choose NE Explorer from the
shortcut menu.
Step 2 In NE Explorer, select the desired NE from the Object Tree and choose Configuration > Packet
Configuration > LPT Management from the Function Tree.
Step 3 Click the Point-to-Multipoint LPT tab.
Step 4 Select the desired point-to-multipoint LPT and click Delete.
----End
The method of configuring LPT on OptiX OSN series NEs is similar (except for board names and slots); therefore, this document uses OptiX OSN 3500 NEs as an example. For boards applicable to LPT, see the heading "Availability". For valid slots for the applicable boards, see the hardware description for a specific OptiX OSN product.
Service Requirements
As shown in Figure 11-4, an E-Line service exclusively occupying a UNI port is configured
between NE1 and NE2. This service needs to be protected against the following faults:
l Faults on the access side (the link between the NodeB and NE1 and the link between the RNC and NE2)
l Faults on the network side (the service network between NE1 and NE2)
Figure 11-4 Network diagram for LPT based on an E-Line service exclusively occupying a UNI port
(The NodeB connects to NE1 and the RNC connects to NE2; NE1 and NE2 are connected by a PW across the PSN, with a backup network in parallel.)
Solution
Link aggregation group (LAG) can provide protection against faults on the access side. MPLS
tunnel APS can provide protection against faults on the network side. Neither LAG nor MPLS
tunnel APS, however, can provide protection against faults on both the access side and the
network side. To protect services against faults on both the access side and the network side,
you can configure LPT based on an E-Line service exclusively occupying a UNI port.
Service Planning
The LPT parameters for NE1 and NE2 are planned as follows.
Issue 03 (2012-11-30)
432
11 LPT
Parameter                          Value
Recovery Time (s)                  1 (default value)
Fault Detection Mode               LPT OAM
Fault Detection Period (100 ms)    10 (default value)
LPT Enabled                        Enabled
Prerequisites
l You are familiar with the service requirements and planning information.
Procedure
Step 1 Follow instructions in 11.8.2 Configuring Point-to-Point LPT to configure point-to-point LPT
on NE1 and NE2.
Parameter                          Value
Recovery Time (s)                  1 (default value)
Fault Detection Mode               LPT OAM
Fault Detection Period (100 ms)    10 (default value)
LPT Enabled                        Enabled
Step 2 Follow instructions in 11.9.1 Querying Point-to-Point LPT Configurations to query the point-to-point LPT configurations on NE1 and NE2.
----End
NOTE
The method of configuring LPT on OptiX OSN series NEs is similar (except for board names and slots); therefore, this document uses OptiX OSN 3500 NEs as an example. For boards applicable to LPT, see the heading "Availability". For valid slots for the applicable boards, see the hardware description for a specific OptiX OSN product.
Service Requirements
As shown in Figure 11-5, an Ethernet service sharing a UNI port is configured between NodeB
1 and the RNC and another is configured between NodeB 2 and the RNC. The services need to
be protected against the following faults:
l Faults on the access side (the link between NodeB1 and NE1, the link between NodeB2 and NE2, and the link between the RNC and NE3)
l Faults on the network side (the service network between NE1 and NE3 and the service network between NE2 and NE3)
Figure 11-5 Network diagram for LPT based on Ethernet services sharing a UNI port
(NodeB1 (VLAN 100) connects to NE1 and NodeB2 (VLAN 200) connects to NE2; NE1 and NE2 connect to NE3 over PWs across the PSN; NE3 connects to the RNC over a shared UNI port carrying VLANs 100 and 200, with a backup network in parallel.)
Solution
Link aggregation group (LAG) can be configured to provide protection against faults on the
access side. MPLS tunnel APS can be configured to provide protection against faults on the
network side. Neither LAG nor MPLS tunnel APS, however, can provide protection against
faults on both the access side and the network side. To protect services against faults on both
the access side and the network side, you can configure LPT based on Ethernet services sharing
a UNI port.
Service Planning
The LPT parameters for NE1, NE2, and NE3 are planned as follows.
Parameter                          NE1                 NE2                 NE3
Primary Function Point
  Point Type                       UNI                 UNI                 UNI
  Port                             21-PETF8-1          21-PETF8-1          2-PEG8-1
Second Function Point 1
  Point Type                       PW                  PW                  PW
  Port                             PW ID: 101          PW ID: 102          PW ID: 101
Second Function Point 2
  Point Type                       -                   -                   PW
  Port                             -                   -                   PW ID: 102
Recovery Time (s)                  1 (default value)
Switching Mode                     Strict Mode
Fault Detection Mode               LPT OAM
Fault Detection Period (100 ms)    10 (default value)
LPT Enabled                        Enabled
Prerequisites
l You are familiar with the service requirements and planning information.
Procedure
Step 1 Follow instructions in 11.8.3 Configuring Point-to-Multipoint LPT to configure point-to-multipoint LPT on NE1, NE2, and NE3.
Parameter                          NE1                 NE2                 NE3
Primary Function Point
  Point Type                       UNI                 UNI                 UNI
  Port                             21-PETF8-1          21-PETF8-1          2-PEG8-1
Second Function Point 1
  Point Type                       PW                  PW                  PW
  Port                             PW ID: 101          PW ID: 102          PW ID: 101
Second Function Point 2
  Point Type                       -                   -                   PW
  Port                             -                   -                   PW ID: 102
Recovery Time (s)                  1 (default value)
Switching Mode                     Strict Mode
Fault Detection Mode               LPT OAM
Fault Detection Period (100 ms)    10 (default value)
LPT Enabled                        Enabled
Prerequisites
l The LPT function must be enabled for the transmission equipment at the local and opposite ends.
Background Information
NOTE
The following example illustrates the verification on point-to-multipoint LPT. Point-to-point LPT is
verified in a similar way.
As shown in Figure 11-6, disconnect the UNI-side link between the RNC and NE1, and the NNI-side links between NE1 and NE2 and between NE1 and NE3. Check whether the communication between the RNC and the NodeBs is normal. By using this method, you can verify whether the LPT function is successfully configured.
(Figure 11-6 shows the networking: NodeB 1, NodeB 2, and NodeB 3 connect to NE 2 and NE 3, which connect to NE 1 over PWs across the PSN on the NNI side; NE 1 connects to the RNC over the UNI-side link, and a backup network runs in parallel.)
Procedure
Step 1 On the Main Topology, right-click the icon of the required NE and then choose NE Explorer
from the shortcut menu.
Step 2 In the NE Explorer, select the required NE from the Object Tree and then choose
Configuration > Packet Configuration > LPT Management from the Function Tree.
Step 3 Click the Point-to-Multipoint LPT tab. In the lower right corner, click Query. Check whether
the attributes of the primary function point and the secondary function point are correctly set.
Step 4 Disconnect the UNI-side link between the RNC and NE1.
Step 5 Query the alarm list of NE2 and NE3. If the LPT_CLOSE_PORT alarm is reported and the RNC
communicates normally with the NodeBs, you can infer that the LPT switching is successful.
Step 6 Disconnect the NNI-side links between NE1 and NE2 and between NE1 and NE3.
Step 7 Query the alarm list of NE1, NE2, and NE3. If the LPT_CLOSE_PORT alarm is reported and
the RNC communicates normally with the NodeBs, you can infer that the LPT switching is
successful.
----End
11.13 Troubleshooting
To locate and rectify the common faults in a timely manner, you need to get familiar with the
fault symptoms and the troubleshooting flow. This topic describes how to handle common LPT
faults by using point-to-multipoint LPT as an example.
Symptom
In the case of the point-to-multipoint LPT networking as shown in Figure 11-7, the common
fault symptom is as follows: When the link or network between the RNC and the NodeBs
becomes faulty, services between the RNC and the NodeBs cannot be switched to the backup
network.
Figure 11-7 Networking diagram of point-to-multipoint LPT
(NodeB 1, NodeB 2, and NodeB 3 connect to NE 2 and NE 3, which connect to NE 1 over PWs across the PSN on the NNI side; NE 1 connects to the RNC over the UNI-side link, and a backup network runs in parallel.)
Possible Causes
l NEs on the entire network do not adopt the same switching mode. (In this example, all the NEs are assumed to adopt the strict mode.)
l The primary function point and the secondary function point are not correctly set.
Procedure
Step 1 On the Main Topology, right-click the NE1 icon and then choose NE Explorer from the shortcut
menu.
Step 2 In the NE Explorer, select NE1 and then choose Configuration > Packet Configuration > LPT
Management from the Function Tree.
Step 3 Click the Point-to-Multipoint LPT tab, and then click Query.
1.
2. Check the value of Switching Mode. If it is set to Non-strict Mode, set it to Strict Mode.
3. Check the value of Point Type for the primary function point. If it is set to PW, reconfigure the point-to-multipoint LPT and set Point Type of the primary function point to UNI.
Step 4 Repeat Steps 1 to 3 to check whether the configurations of NE2 and NE3 are correct.
NOTE
Point Type of the primary function point for NE2 and NE3 must be set to PW. If Point Type is set to
UNI, reconfigure the point-to-multipoint LPT.
Step 5 If the fault persists, contact Huawei technical support engineers to rectify the fault.
----End
Alarm                  Description
LPT_CFG_CLOSEPORT
Parameter: Binding Status
Value Range: Unbound, Bound
Description: NOTE: Before binding point-to-point LPT to a service, ensure that the service has been configured. The default parameter value is Unbound.

Parameter: Primary Function Point

Parameter: Second Function Point Type
Value Range: PW, UNI
Description: Default value: PW

Parameter: Second Function Point
Parameter: LPT Status

Parameter: LPT Enabled
Value Range: Enabled, Disabled
Description: Default value: Disabled

Parameter: Recovery Time (s)
Value Range: 1 to 600
Description: Default value: 1
Parameter: Hold-Off Time (ms)
Value Range: 0 to 10000

Parameter: Switching Mode
Value description:
Recommendations: If there are many secondary LPT points and these points do not carry important services, the strict LPT switching mode is recommended to prevent a single node fault from triggering switchover of the entire network. If there are only a few secondary LPT points and these points carry important services, the non-strict LPT switching mode is recommended.
Parameter: Fault Detection Mode
Value description:
l LPT OAM: The LPT protocol components exchange LPT protocol packets between LPT NEs on a service network and determine the status of the LPT links between the NEs based on the negotiation result.
l PW OAM: The LPT protocol components exchange LPT protocol packets between LPT NEs on a service network and determine the status of the LPT links between the NEs based on the negotiation result and the statuses of PWs carried on the LPT links. If negotiation using LPT packets fails or PWs are faulty, the LPT protocol components consider that the LPT links are faulty. The LPT protocol components consider that LPT links are working properly only when negotiation using LPT packets is successful and PWs are functional.
Recommendations: The LPT OAM detection mechanism uses LPT protocol packets to detect faults at a minimum interval of 1 s. The PW OAM detection mechanism detects faults based on PW OAM and at a minimum interval of 3.3 ms. When using the minimum detection period of 3.3 ms, the PW OAM detection mechanism supports faster LPT switching but consumes more system resources than the LPT OAM detection mechanism.

Parameter: Fault Detection Period (100 ms)
Value Range: 3 to 100
Description: Default value: 10

Parameter: User-Side Port Status
Value Range: CLOSE, OPEN
Parameter: Primary Function Point Type
Value Range: UNI, PW
Description: The primary LPT point is the entity that runs the LPT protocol. It determines the LPT state based on the running information at each LPT point. In point-to-multipoint LPT configuration, there are one primary point and several secondary points, which correspond to the root node and leaf nodes in a networking topology.
Value description:
l UNI: A primary LPT point is a UNI port.
l PW: A primary LPT point is a PW on the network side.

Parameter: Primary Function Point Board
Value Range: Slot number-board name

Parameter: Primary Function Point Port
Value Range: Slot number-board name-port number

Parameter: Primary Function Point Type
Value Range: PW, UNI
Description: Default value: PW

Parameter: Primary Function Point ID

Parameter: Second Function Point Type
Value Range: UNI, PW

Parameter: Second Function Point Board

NOTE
12 MC-LAG
An MC-LAG can work with PW APS on the NNI side to implement dual-homing of E-Line
services. This topic describes how to configure an MC-LAG in a dual-homing scenario wherein
AC-side links transmit data.
12.10 Relevant Alarms and Performance Events
This topic describes the alarms and performance events that are related to the MC-LAG.
12.11 Parameter Description: MC-LAG
This topic describes the parameters required for configuring MC-LAG.
12.1 Introduction
MC-LAG allows aggregating links of multiple NEs into one link aggregation group (LAG).
When a link or an NE fails, MC-LAG automatically switches the services to another available
link in the same LAG.
Definition
By means of the MC-LAG control protocol, MC-LAG allows aggregating multiple inter-chassis
data links that are connected to the same device to provide a more reliable connection.
Application
As shown in Figure 12-1, services from the NodeB are transmitted to the RNC over the PSN;
NE1 and NE2 work with the RNC to provide MC-LAG protection for services.
Figure 12-1 MC-LAG application in dual-homing
(The NodeB connects to NE3; on the NNI side, PW APS across the PSN protects the PWs from NE3 to NE1 and NE2. On the AC side, LAG1 on NE1 (A: active) and LAG2 on NE2 (S: standby) form an MC-LAG with LAG3 on the RNC. NE1 and NE2 are linked by a multi-chassis synchronous communication channel. LAG1 and LAG2 are single-chassis (SC) LAGs on NE1 and NE2.)
NE1 and NE2 communicate with each other by means of the inter-chassis synchronous communication tunnel. Specifically, the two dual-homed NEs periodically exchange information about the status of LAG1 and LAG2 and negotiate the active/standby status of LAG1 and LAG2 according to fault conditions.
Purpose
On a 3G bearer network, a fault on the AC link that connects the convergence NE and the RNC
can interrupt all the services from the NodeBs connected to the RNC. To solve this problem, a
more reliable connection between the convergence NE and the RNC is required. In actual
application, the RNC can be connected to two convergence NEs and configuration of MC-LAG
enables mutual backup of the two LAGs configured on the two convergence NEs for a highly
reliable connection.
The LACP protocol provides the data switching equipment with a standard negotiation
mode to form an aggregation link. After the LACP protocol is enabled, the system
automatically aggregates multiple links according to its configuration and enables the
aggregation link to transmit and receive data.
The LACP protocol helps to maintain the status of the aggregation link after the aggregation
link is formed. When aggregation conditions change, the LACP protocol automatically
adjusts or releases the aggregation link.
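The aggregation behaviour described above, in which links can only form one aggregation when their negotiated attributes match, can be sketched as follows. This is a simplified illustration rather than the LACP state machine; the attribute set used as the aggregation key is an assumption:

```python
def aggregate_links(links):
    """Group candidate links into aggregations, LACP-style.

    Links join the same aggregation only when their operational key
    (speed, duplex, peer system) matches. Field names illustrative.
    """
    groups = {}
    for link in links:
        key = (link["speed"], link["duplex"], link["peer_system"])
        groups.setdefault(key, []).append(link["name"])
    return groups

links = [
    {"name": "PORT-1", "speed": 1000, "duplex": "full", "peer_system": "RNC"},
    {"name": "PORT-2", "speed": 1000, "duplex": "full", "peer_system": "RNC"},
    {"name": "PORT-3", "speed": 100,  "duplex": "full", "peer_system": "RNC"},
]
groups = aggregate_links(links)
# PORT-1 and PORT-2 can aggregate; PORT-3 has a different speed and cannot.
assert groups[(1000, "full", "RNC")] == ["PORT-1", "PORT-2"]
assert groups[(100, "full", "RNC")] == ["PORT-3"]
```

Re-running the grouping whenever a link's attributes change mirrors the way LACP adjusts or releases an aggregation when aggregation conditions change.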
LAG
A LAG allows aggregating multiple links that are attached to the same equipment to increase
bandwidth and availability of links. The aggregated links can be considered as a single logical
link.
A LAG aggregates multiple physical links to form a logical link at a higher rate. Link aggregation
functions between adjacent equipment and is independent of the network topology. Link
aggregation is also called port aggregation because one link corresponds to one port in Ethernet
transmission.
Load sharing
Each member link in a LAG carries traffic. That is, the member links in the LAG share the load. In the load sharing mode, the link bandwidth is increased. When a member in a LAG changes or when a certain link fails, the traffic is re-allocated automatically.
Load non-sharing
Only one member link in a LAG carries traffic, and the other links in the LAG are in the standby state. This is equivalent to a hot standby mechanism. That is, when the active link in a LAG fails, the system chooses a link from the standby links in the LAG, and the chosen link replaces the failed link. In this manner, the link failure does not affect services.
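The difference between the two modes can be sketched as an egress-link selection function. This is an illustrative model, not the equipment's forwarding logic; the flow-hash choice (CRC32) and the field names are assumptions:

```python
import zlib

def egress_member(members, frame_key, load_sharing):
    """Pick the member link of a LAG that carries a frame.

    members: member links ordered by priority (index 0 = highest).
    frame_key: bytes identifying the flow.
    Load sharing: every up link carries traffic, spread by a flow hash.
    Non-sharing: only the best up link is active; the rest stand by.
    """
    up = [m for m in members if m["up"]]
    if not up:
        return None
    if load_sharing:
        return up[zlib.crc32(frame_key) % len(up)]
    return up[0]  # hot standby: a single active link

links = [{"name": "PORT-1", "up": True}, {"name": "PORT-2", "up": True}]
# Non-sharing: everything rides the highest-priority up link.
assert egress_member(links, b"flow-a", False)["name"] == "PORT-1"
# If PORT-1 fails, traffic moves to the standby link automatically.
links[0]["up"] = False
assert egress_member(links, b"flow-a", False)["name"] == "PORT-2"
```

A real implementation hashes on frame fields so that one flow always stays on one link; CRC32 here merely stands in for such a hash.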
12.3 Specifications
This section describes the capacity of the OptiX OSN equipment for supporting MC-LAG.
Table 12-1 provides specifications associated with MC-LAG.
Table 12-1 Specifications associated with MC-LAG

Item                    Specifications
Support capability      64
Load sharing mode       Load non-sharing
Revertive Mode          Revertive, Non-revertive
Switching Protocol      LACP protocol
Switching conditions    l Manual aggregation: the working port is faulty; the port working mode changes.
                        l Static aggregation: the working port is faulty; the port working mode changes; the port priority changes; the system priority changes.
IEEE 802.3ad Carrier sense multiple access with collision detection (CSMA/CD) access
method and physical layer specifications
12.5 Availability
The MC-LAG function requires the support of the applicable equipment and boards.
Version Support

Product Name          Applicable Version
U2000                 V100R005

Hardware Support

Board Type            Applicable Equipment
N2PEX1
N1PEX2
N1PEG8
N1PETF8
N1PEFF8
N1EDQ41
TNN1EX2
TNN1EG8
TNN1ETMC
TNN1EFF8
Applicable Object                                        Remarks
MC-LAG
MC-LAG and SNCP
Load sharing mode
Multi-chassis Synchronization Protocol (MCSP) channel
MCSP channel
MCSP channel
MCSP channel
MCSP channel
Maintenance Principles
None.
12.7 Principles
This section describes the principles for creating MC-LAGs and for performing the MC-LAG
protection switching using the MC-LAG implementation model.
Implementation Model
The MC-LAG supports only the non-load sharing mode and enables active/standby protection
for Ethernet links. Figure 12-2 shows the MC-LAG implementation model.
Figure 12-2 MC-LAG implementation model
(LAG1 on NE1 is active and LAG2 on NE2 is standby; NE1 and NE2 communicate over the MCSP channel, and both connect to LAG3 on the router.)
Creating MC-LAGs
l
1. Compares the total bandwidth of LAG1 with that of LAG2 and assigns the smaller of the two system IDs to the LAG with the larger bandwidth. In this way, the LAG with the larger bandwidth functions as active and the LAG with the smaller bandwidth functions as standby.
2. Compares the system IDs of LAG1 and LAG2 if LAG1 and LAG2 have the same total bandwidth. The LAG with the smaller system ID functions as active and the LAG with the larger system ID functions as standby.
3. The selected LAG determines the specific member links for carrying services based on the priorities of its member ports, the port status, and the negotiation results with LAG3.
In manual aggregation mode, only one member link can be configured for each LAG on the devices.
In manual aggregation mode, LAG3 cannot communicate with LAG1 or LAG2. The MC-LAG selects the working link based on the LAG configurations on each device. Therefore, when configuring LAG1, LAG2, and LAG3 (which is interconnected with both LAG1 and LAG2), ensure that two interconnected LAGs select the same link to carry services.
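The active/standby selection rules listed above can be sketched as follows. This is an illustrative model only; the field names are assumptions, not U2000 parameters:

```python
def select_active_lag(lag1, lag2):
    """Negotiate the active/standby roles of the two dual-homed LAGs.

    Rules from the text above: the LAG with the larger total bandwidth
    becomes active; on a bandwidth tie, the smaller system ID wins.
    """
    if lag1["bandwidth"] != lag2["bandwidth"]:
        active = lag1 if lag1["bandwidth"] > lag2["bandwidth"] else lag2
    else:
        active = lag1 if lag1["system_id"] < lag2["system_id"] else lag2
    standby = lag2 if active is lag1 else lag1
    return active["name"], standby["name"]

# Equal bandwidth: the smaller system ID becomes active.
assert select_active_lag(
    {"name": "LAG1", "bandwidth": 2000, "system_id": 5},
    {"name": "LAG2", "bandwidth": 2000, "system_id": 9},
) == ("LAG1", "LAG2")
# Unequal bandwidth: the larger bandwidth wins regardless of system ID.
assert select_active_lag(
    {"name": "LAG1", "bandwidth": 1000, "system_id": 1},
    {"name": "LAG2", "bandwidth": 2000, "system_id": 9},
) == ("LAG2", "LAG1")
```

In static aggregation mode, the same tie-break is described with device MAC addresses instead of system IDs; substituting the MAC for the system ID in the comparison gives that variant.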
The local MC-LAG compares the local LAG information with the peer LAG information by exchanging MC-LAG packets with the peer MC-LAG as follows:
l Compares the total bandwidth of LAG1 with that of LAG2. Ports in the LAG with the larger bandwidth function as working ports, and ports in the LAG with the smaller bandwidth function as standby ports.
l Compares the device MAC addresses if LAG1 and LAG2 have the same total bandwidth. Ports in the LAG with the smaller MAC address function as working ports, and ports in the LAG with the larger MAC address function as standby ports.
During MC-LAG switching, Ethernet services are switched to the standby connection for forwarding.
Background Information
l Before configuring an MC-LAG, you must configure SC-LAGs on the dual-homed nodes.
l Load Sharing of the LAGs on the dual-homed nodes must be set to the same value.
l LAG Type of the LAGs on the dual-homed nodes and the RNC must be set to the same value.
l Revertive Mode of the MC-LAG on the dual-homed nodes must be set to the same value.
l It is recommended that you configure a static LAG, because it provides higher reliability.
l Timeout Time of one dual-homed node must be greater than Hello Packet Sending Interval of the other node. Generally, the two parameters on the two dual-homed nodes are set to the same values.
l On the dual-homed nodes, the multi-chassis synchronization protocol must run in the bidirectional tunnel that carries the multi-chassis synchronous communication information.
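The consistency requirements in the list above can be checked mechanically. The sketch below is illustrative only; the dictionary keys are assumptions, not U2000 parameter names:

```python
def check_mc_lag_consistency(node_a, node_b):
    """Check dual-homed node settings against the rules listed above.

    Returns a list of problems: mismatched Load Sharing, LAG Type or
    Revertive Mode, or a Timeout Time that is not greater than the
    peer's Hello Packet Sending Interval.
    """
    problems = []
    for key in ("load_sharing", "lag_type", "revertive_mode"):
        if node_a[key] != node_b[key]:
            problems.append(f"{key} differs between the dual-homed nodes")
    if node_a["timeout_s"] <= node_b["hello_interval_s"]:
        problems.append("node A Timeout Time must exceed node B's Hello interval")
    if node_b["timeout_s"] <= node_a["hello_interval_s"]:
        problems.append("node B Timeout Time must exceed node A's Hello interval")
    return problems

node = {"load_sharing": "Non-Sharing", "lag_type": "Static",
        "revertive_mode": "Revertive", "timeout_s": 100, "hello_interval_s": 1}
# Identically configured nodes pass the check.
assert check_mc_lag_consistency(node, dict(node)) == []
```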
Prerequisites
l A bidirectional tunnel must be configured between the dual-homed nodes for multi-chassis synchronous communication.
Procedure
Step 1 In the NE Explorer, select the required NE and then choose Configuration > Packet
Configuration > Synchronization Protocol Management from the Function Tree.
Step 2 Click New. In the Create Cross-Equipment Synchronization Protocol dialog box that is
displayed, set the parameters of multi-chassis synchronous communication.
NOTE
Step 3 Click OK. The Operation Result dialog box is displayed, indicating that the operation is
successful.
Step 4 In the Operation Result dialog box that is displayed, click Close.
Step 5 Perform Step 1 to Step 4 to set multi-chassis synchronous communication parameters on the
other dual-homed node.
----End
Prerequisites
l The LAGs must be configured on the dual-homed nodes that are connected to the RNC.
Procedure
Step 1 In the NE Explorer, select the required NE, and choose Configuration > Packet
Configuration > Interface Management > Link Aggregation Group Management from the
Function Tree.
Step 2 Click the Cross-Equipment Link Aggregation Groups tab.
Step 3 Click New. In the Create Cross-Equipment Link Aggregation Group dialog box that is
displayed, set the parameters of the MC-LAG.
Step 4 Click OK. The Operation Result dialog box is displayed, indicating that the operation is
successful.
Step 5 In the Operation Result dialog box, click Close.
Step 6 Perform Step 1 to Step 5 to set the MC-LAG parameters on the other dual-homed node.
----End
(Networking diagram: the NodeB connects to NE3; on the NNI side, 1:1 PW APS across the PSN protects the PWs from NE3 to NE1 and NE2; on the AC side, LAG1 on NE1 (A: active) and LAG2 on NE2 (S: standby) form an MC-LAG with LAG3 on the RNC; NE1 and NE2 are linked by a multi-chassis synchronous communication channel.)
Service Planning
Table 12-2 lists the LAG parameters of NE1 and NE2 on the AC side (RNC side).
Table 12-2 LAG parameter planning for NE1 and NE2

Parameter         NE1                      NE2
LAG No.
LAG Name          LAG1                     LAG2
LAG Type          Static                   Static
Load Sharing      Non-Sharing              Non-Sharing
System Priority
Main Board        2-N1PEG8                 2-N1PEG8
Main Port         1(PORT-1)                1(PORT-1)
-                 2-N1PEG8-2(PORT-2)       2-N1PEG8-2(PORT-2)
Port Priority     2-N1PEG8-1(PORT-1): 1    2-N1PEG8-1(PORT-1): 1
                  2-N1PEG8-2(PORT-2): 2    2-N1PEG8-2(PORT-2): 2
Table 12-3 lists the inter-chassis synchronous communication protocol parameters of the MC-LAG on the AC side (RNC side).

Table 12-3 Planning of inter-chassis synchronous communication protocol parameters

Parameter              NE1, NE2
Protocol Channel ID    10
Timeout Time(s)        100
MPLS Tunnel            Tunnel 410

Table 12-4 lists the MC-LAG parameters of NE1 and NE2 on the AC side (RNC side).
Table 12-4 MC-LAG parameter planning for NE1 and NE2

Parameter                NE1                 NE2
Cooperative Channel ID   10                  10
-                        Non-load-sharing    Non-load-sharing
Restoration Mode         Revertive           Revertive
Prerequisites
l You must be familiar with the networking, requirements, and service planning of the example.
Procedure
Step 1 Configure the multi-chassis synchronous communication function on NE1 and NE2.
1. In the NE Explorer, select NE1, and then choose Configuration > Packet Configuration > Synchronization Protocol Management from the Function Tree.
2. Click New. In the Create Cross-Equipment Synchronization Protocol dialog box that is displayed, set the parameters of multi-chassis synchronous communication.

   Parameter              NE1, NE2
   Protocol Channel ID    10
   Timeout Time(s)        100
   MPLS Tunnel            Tunnel 410

3. Click OK. The Operation Result dialog box is displayed, indicating that the operation is successful.
4.
5. Perform Step 1.1 to Step 1.4 to set the multi-chassis synchronous communication parameters on NE2.
1. In the NE Explorer, select NE1, and choose Configuration > Packet Configuration > Interface Management > Link Aggregation Group Management from the Function Tree.
2.
3. Click New. In the Create Link Aggregation Group dialog box that is displayed, set the parameters of the LAG.
Table 12-5 lists the LAG parameters of NE1 on the AC side (RNC side).

Table 12-5 LAG parameter planning for NE1

   Parameter         NE1
   LAG No.
   LAG Name          LAG1
   LAG Type          Static
   Load Sharing      Non-Sharing
   System Priority
   Main Board        2-N1PEG8
   Main Port         1(PORT-1)
   -                 2-N1PEG8-2(PORT-2)
4. Click OK. The Operation Result dialog box is displayed, indicating that the operation is successful.
5.
6.

   Parameter      NE1
   Port Priority  2-N1PEG8-2(PORT-1): 1
                  2-N1PEG8-2(PORT-2): 2

7. Click OK. The Operation Result dialog box is displayed, indicating that the operation is successful.
8.
9.
Parameter        NE2
LAG No.
LAG Name         LAG2
LAG Type         Static
Load Sharing     Non-Sharing
System Priority
Main Board       2-N1PEG8
Main Port        1(PORT-1)
                 2-N1PEG8-2(PORT-2)
Port Priority    2-N1PEG8-2(PORT-1): 1
                 2-N1PEG8-2(PORT-2): 2
In the NE Explorer, select NE1, and choose Configuration > Packet Configuration > Interface Management > Link Aggregation Group Management from the Function Tree.
2.
3. Click New. In the Create Cross-Equipment Link Aggregation Group dialog box that is displayed, set the parameters of the MC-LAG.

   Parameter               NE1
   Cooperative Channel ID  10
                           Non-load-sharing
   Restoration Mode        Revertive

4. Click OK. The Operation Result dialog box is displayed, indicating that the operation is successful.
5.
6.

   Parameter               NE2
   Cooperative Channel ID  10
                           Non-load-sharing
   Restoration Mode        Revertive
----End
Alarm Name           Meaning
MCLAG_CFG_MISMATCH
MCSP_PATH_LOCV
Parameter            Value Range  Description
Protocol Channel ID  1-15         For example, 1
                                  NOTE: This parameter is not applicable to an MC-LAG.
                     1-10         Default: 1
                     30-3600      Default: 600
MPLS Tunnel          Tunnel
Parameter        Value Range               Description
LAG No.                                    For example, 1
                                           NOTE: The value is an integer ranging from 1 to 64.
LAG Name
LAG Type         Manual, Static            Default: Static
Switch Protocol
Switch Mode
Revertive Mode   Revertive, Non-Revertive  Default: Non-Revertive
Load Sharing     Sharing, Non-Sharing      Default: Non-Sharing
System Priority  0 to 65535                Default: 32768
WTR Time(min)    1 to 30                   Default: 10
Port
Port Priority    0-65535                   Default: 32768
13 MPLS PW APS
This section uses an example to describe the configuration of PW FPS according to network
planning.
13.11 Relevant Alarms and Performance Events
This topic describes the alarms and performance events that are related to the MPLS PW APS.
13.12 Parameter Description: MPLS PW APS
This topic describes the parameters required for configuring PW APS.
13.1 Introduction
PW APS provides network-level protection. In PW APS, a protection PW is created to protect services on the working PW: when the working PW fails, services are switched to the protection PW. PW APS is available in three types: PW APS 1+1, PW APS 1:1, and PW Fast Protection Switching (FPS).
Definition
PW APS 1+1 protection
Generally, the source dually transmits services to the working PW and protection PW and the
sink receives the services from the working PW. When the working PW is faulty, the sink
receives the services from the protection PW.
PW APS 1:1 protection
Generally, the source transmits services only to the working PW and the sink receives the
services from the working PW. When the working PW is faulty, the source transmits services
to the protection PW and the sink receives the services from the protection PW.
PW FPS protection
Generally, the source transmits services only to the working PW and the sink receives the
services from the working PW. When the working PW is faulty, the source transmits services
to the protection PW and the sink receives the services from the protection PW. PW FPS
generally works with the IWF function to provide end-to-end protection.
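The difference among the three types comes down to where traffic is transmitted and where the switch happens. The following sketch models this behavior (an illustrative model only, not device code; the function and value names are invented):

```python
# Illustrative model of the three PW APS variants: which PWs the source
# transmits on, and which PW the sink receives from, given link health.

def feed_and_select(scheme, working_ok):
    """Return (PWs the source transmits on, PW the sink receives from)."""
    receive = "working" if working_ok else "protection"
    if scheme == "1+1":
        # 1+1: the source dually transmits; only the sink moves its selector.
        return ({"working", "protection"}, receive)
    # 1:1 and FPS: single feed; the source also moves to the protection PW.
    return ({receive}, receive)

assert feed_and_select("1+1", True) == ({"working", "protection"}, "working")
assert feed_and_select("1:1", False) == ({"protection"}, "protection")
```

The sketch highlights why 1+1 switching needs no coordination for the transmit side, while 1:1 and FPS must also move the source.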
Application
Application of PW APS 1+1/1:1 protection
As shown in Figure 13-1, a PW APS protection group is configured between PE1 and PE2.
Generally, services are transmitted on the working PW. When the working PW is faulty, APS
switching occurs and the services are switched over to the protection PW.
Figure 13-1 Networking diagram of PW APS 1+1/1:1 (same source and same sink)
(Figure: two NodeBs connect via PE1 across the PSN to PE2 and the RNC; each service has a working PW and a protection PW (PW1-PW4), with PW APS protection groups configured on PE1 and PE2.)
In actual applications, PW APS can work with MC-LAG to provide multi-chassis protection.
As shown in Figure 13-2, PW APS protection groups are configured between PE1 and PE2 and
between PE1 and PE3. Generally, services are transmitted on the working PW. When the
working PW is faulty, APS switching occurs and the services are switched over to the protection
PW.
Figure 13-2 Networking diagram of PW APS 1+1/1:1 (same source but different sinks)
(Figure: NodeB connects to PE1; across the PSN, PE2 (A: active, carrying services) and PE3 (S: standby) run an MC-LAG toward the RNC; a working PW, a protection PW, and a multi-chassis synchronous communication channel link the PEs.)

(Figure: PW FPS application with the IWF function: NodeB connects to PE1 and, across the PSN, to the active (A) and standby (S) sinks toward the RNC; IWF is enabled at both ends; a working PW and a protection PW are configured.)
Purpose
PW APS protects important working PWs on a network and helps prevent service interruption resulting from a PW failure.
Switching Type
With regard to the location of protection switching, PW APS comprises two switching types: single-ended switching and dual-ended switching.
l Single-ended switching: When the PW APS protection group fails at one end, services are switched at the failed end, and the other end remains unaffected. Single-ended switching is fast and stable. After single-ended switching occurs, services are transmitted to the protection PW but received from the working PW.
l Dual-ended switching: When the PW APS protection group fails at one end, services are switched at both ends. Dual-ended switching takes more time. After dual-ended switching occurs, services are transmitted to and received from the same PW, which facilitates service management.
Switching Protocol
PW APS uses the automatic protection switching (APS) protocol to coordinate the source and sink to realize functions such as protection switching, switching delay, and wait-to-restore. The APS protocol is transmitted in the protection PW to inform both ends of the protocol state and the switching state. The equipment at both ends switches services to the proper PW according to the protocol state and switching state.
l The protocol state indicates whether the APS protocol of the protection group is currently enabled.
l The switching state indicates the current switching state of the APS protocol: the normal state, WTR state, lockout of protection, or forced switching state.
Revertive Mode
PW APS comprises two revertive modes of protection switching: revertive mode and non-revertive mode.
l In revertive mode, when the NE is in the switching state and the working PW becomes normal, the protection switching request is cleared. If no other switching request is received during the WTR time, the NE releases the switching after the WTR time expires. As a result, services are switched from the protection PW back to the working PW.
l In non-revertive mode, services are not switched from the protection PW back to the working PW even after the working PW is restored to normal.
WTR Time
The WTR time is the period after the working PW is restored to normal and before the NE releases the switching. To prevent frequent switching when the working PW is unstable, it is recommended that you set the WTR time to 5 to 12 minutes.
Hold-Off Time
The hold-off time is the time between the declaration of signal degrade or signal fail and the initialization of the protection switching algorithm. When different protection schemes are configured at the same time, the hold-off time helps avoid conflicts between them. By default, the hold-off time of the equipment is 0s.
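The interplay of the hold-off time and the WTR time can be sketched as a minimal timing model (assumed behavior, illustrative only; the function name, units, and event representation are invented):

```python
# Timing sketch for a revertive PW APS group: the hold-off time delays the
# reaction to a failure; the WTR time delays the revert after recovery.

def protection_events(fail_t, recover_t, hold_off=0.0, wtr_min=10):
    """Return (switching time, revert time) in seconds for given events."""
    switch_at = fail_t + hold_off          # switching starts after hold-off
    revert_at = recover_t + wtr_min * 60   # revert only after WTR expires
    return switch_at, revert_at

# Working PW fails at t=100 s and recovers at t=400 s; defaults are the
# documented hold-off of 0 s and WTR of 10 minutes.
assert protection_events(100, 400) == (100.0, 1000)
```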
PW APS Binding
PW APS binding reduces the consumption of APS resources and therefore speeds up batch service protection switching.
The dual-homing equipment needs to support a large number of PW APS protection groups. If each protection group initiated its own state machine, the resources and capacity of the equipment could not meet the requirements of all protection groups.
If the equipment is configured with n PW APS protection groups (APS1, APS2, ..., APSn), then APS1, APS2, and APS3 can be bound together. In this case, APS1 is the PW APS protection group, and APS2 and APS3 are subordinate protection pairs. In this manner, APS1, APS2, and APS3 form an example of PW APS binding.
The state of a subordinate protection pair is affected by the state of the PW APS protection group:
l When PW switching occurs in the PW APS protection group, PW switching also occurs in the subordinate protection pairs.
l When PW switching occurs in a subordinate protection pair, however, PW switching does not occur in the PW APS protection group.
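This one-way master/subordinate relationship can be sketched as follows (assumed semantics, illustrative only; the class and attribute names are invented):

```python
# Sketch of PW APS binding: switching in the master protection group
# propagates to its subordinate pairs, but not the reverse.

class ApsGroup:
    def __init__(self, name):
        self.name, self.active, self.subordinates = name, "working", []

    def bind(self, sub):
        self.subordinates.append(sub)

    def switch(self, target):
        self.active = target
        for sub in self.subordinates:   # the master drives every bound pair
            sub.active = target

aps1, aps2, aps3 = ApsGroup("APS1"), ApsGroup("APS2"), ApsGroup("APS3")
aps1.bind(aps2); aps1.bind(aps3)

aps1.switch("protection")               # master switches -> subordinates follow
assert (aps2.active, aps3.active) == ("protection", "protection")
aps2.active = "working"                 # a subordinate alone does not move APS1
assert aps1.active == "protection"
```

Only APS1 runs a state machine here, which is the resource saving the text describes.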
13.3 Specifications
This section describes the capacity of the OptiX OSN equipment for supporting MPLS PW APS.
Table 13-1 lists the specifications associated with MPLS PW APS.

Item                    Specification
Capacity
Switching Type
Revertive Mode          l Revertive
                        l Non-revertive
Switchover Restoration Time(m)
Switching Protocol      APS
                        NOTE: PW FPS does not support the APS protocol.
Switching Mode          l Locked switching
                        l Forced switching
                        l Automatic switching
                        l Manual switching
                        l Exercise switching
Switching Time
Switching Condition     (Any of the following conditions triggers the switching)
13.5 Availability
The PW APS function requires the support of the applicable equipment and boards.

Version Support

Product Name  Applicable Version
U2000

Hardware Support

Table 13-2 Boards that support PW APS 1+1/1:1

Board Type: N1PEX2, N2PEX1, N1PEG8, N1PETF8, N1PEFF8, R1PEFS8, Q1PEGS2, R1PEGS1, R1PEF4F, TNN1EX2, TNN1EG8, TNN1ETMC, TNN1EFF8

Board Type: N1PEX2, N2PEX1, N1PEG8, N1PETF8, N1PEFF8, TNN1EX2, TNN1EG8, TNN1ETMC, TNN1EFF8
Applicable Object
l PW APS
l Slave protection pair of PW APS
l PW OAM
l PW APS and tunnel APS
l PW APS and MS-PW
l PW APS and E-Line service or E-LAN service
l PW FPS

Maintenance Principles

Applicable Object                       Remarks
PW FPS and interworking function (IWF)  l If an IWF-enabled NE encounters a fiber cut in the receive direction, it sends FDI packets to trigger remote switching.
                                        l If an IWF-enabled NE encounters a fiber cut in the transmit direction, it detects OAM alarms at the Ethernet port, and sends FDI packets to trigger remote switching.
13.7 Principles
In both PW APS 1+1 and PW APS 1:1, the protection PW protects the services on the working
PW. Specifically, when the working PW fails, services are switched to the protection PW. In
PW APS 1+1 protection, services are dually fed and selectively received; in PW APS 1:1
protection, services are singly fed and received.
(Figure: APS protocol state machine with states NR, SWITCH, and WTR. NR enters SWITCH when the automatic/forced/manual switching conditions are met. SWITCH returns to NR when the lockout conditions are met or the forced/manual switching conditions are cleared, and enters WTR when the automatic switching conditions are cleared. WTR returns to SWITCH when the automatic switching conditions are met, and returns to NR when the timer times out, the lockout conditions are met, or the protection PW fails.)
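The NR/SWITCH/WTR transitions in the figure above can be modeled as a small table-driven state machine (a hedged sketch, not device behavior; the event names are invented):

```python
# Sketch of the APS state machine: unknown (state, event) pairs leave the
# state unchanged, mirroring a protocol that ignores irrelevant events.

def next_state(state, event):
    transitions = {
        ("NR", "switch_conditions_met"): "SWITCH",   # automatic/forced/manual
        ("SWITCH", "forced_manual_cleared"): "NR",
        ("SWITCH", "lockout_conditions_met"): "NR",
        ("SWITCH", "automatic_cleared"): "WTR",      # revertive: wait to restore
        ("WTR", "automatic_conditions_met"): "SWITCH",
        ("WTR", "timer_times_out"): "NR",
        ("WTR", "lockout_conditions_met"): "NR",
    }
    return transitions.get((state, event), state)

state = next_state("NR", "switch_conditions_met")   # a failure is detected
state = next_state(state, "automatic_cleared")      # working PW recovers
assert state == "WTR"
assert next_state(state, "timer_times_out") == "NR"
```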
1. The service access board on NE1 sends the received service signal to the working and protection service processing boards.
2. The working service processing board sends the service signal to the working PW, and the protection service processing board sends the service signal to the protection PW.
3. Based on the state machines, the working service processing board on NE2 is selected to receive the service signal from the working PW and to send the service signal to the service access board on NE2.
4. The service access board on NE2 sends the service signal to the opposite equipment.
Figure 13-5 Principle of PW APS 1+1 protection (NE1 -> NE2, before switching)
(Figure: On NE1, the access board and cross-connect board feed both the active and standby service processing boards; the signal crosses the subnets on the working PW and on the protection PW/protocol channel to NE2, where the active path is selected.)
1. The service detection points at the source and sink ends of the protection group monitor the status of the link by means of PW OAM packets.
2. When the service sink (for example, NE2) in a direction detects that a switching condition (for example, an automatic switching condition or external switching condition) is met, the state machines of the working and protection service processing boards experience state transitions. After the preset hold-off time elapses, the protection service processing board is selected to receive the service signal from the protection PW, and the PW APS event is reported.
Figure 13-6 Principle of PW APS 1+1 protection (NE1 -> NE2, after switching)
(Figure: Same topology as Figure 13-5; after switching, NE2 selects the signal received from the protection PW/protocol channel instead of the working PW.)
If the non-revertive mode is set, the signal flow between NE1 and NE2 does not change. If the revertive mode is set, the signal flow between NE1 and NE2 is restored to the state before switching.
1. The service access board on the access side of NE1 sends the received service signal to the working service processing board.
2. The active service processing board sends the service signal to the working PW.
Figure 13-7 Principle of PW APS 1:1 protection (NE1 -> NE2, before switching)
(Figure: On NE1, the access board and cross-connect board feed only the active service processing board; the signal crosses the subnets on the working PW to NE2; the protection PW/protocol channel and the standby service processing board stand by.)
13 MPLS PW APS
The service detection points at the source and sink ends of the protection group monitor
the status of the link by means of PW OAM packets.
2.
When the service sink (for example, NE2) in a direction detects that a switching condition
(for example, automatic switching condition or external switching condition) is met, the
state machines of the working and protection service processing boards experience state
transitions. After the preset hold-off time elapses, the protection service processing board
is selected to receive the service signal from the protection PW.
3.
The sink end sends the SF signal to the service source (for example, NE1) in the protection
PW.
4.
The source end bridges the working PW onto the protection PW and sends the service signal
to the protection PW.
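The dual-ended sequence above can be sketched as follows (illustrative only; the dictionaries and field names are invented):

```python
# Sketch of 1:1 dual-ended switching: the sink switches its selector first,
# then signals SF on the protection PW so the source bridges as well.

def dual_ended_switch(sink, source):
    sink["receive_from"] = "protection"          # sink selects the protection PW
    sf = {"signal": "SF", "via": "protection"}   # SF sent back on the protection PW
    if sf["signal"] == "SF":
        source["transmit_on"] = "protection"     # source bridges onto protection
    return sink, source

sink, source = {"receive_from": "working"}, {"transmit_on": "working"}
sink, source = dual_ended_switch(sink, source)
# Both ends now use the same PW, which is what eases service management.
assert sink["receive_from"] == source["transmit_on"] == "protection"
```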
Figure 13-8 Principle of the PW APS 1:1 dual-ended protection (NE1 -> NE2, after switching)
(Figure: After switching, NE1 bridges the signal onto the protection PW/protocol channel and NE2 receives from it; the working PW is no longer used.)
If the non-revertive mode is set, the signal flow between NE1 and NE2 does not change. If the revertive mode is set, the signal flow between NE1 and NE2 is restored to the state before switching.
1. The route that the services from NE1 to NE4 take is as follows: NE1 -> NE2 -> NE4.
2. The service access board on the access side of NE1 sends the received service signal to the active service processing board.
3. The active service processing board of NE1 sends the service signal to the working PW.
4. NE1, NE2, and NE3 monitor the status of the working and protection PWs by means of PW OAM packets.
Figure 13-9 Principle of PW FPS protection (NE1 -> NE4, before switching)
(Figure: NE1's active service processing board sends the signal on the working PW through NE2 to NE4; the standby service processing board and the protection PW through NE3 stand by; access and cross-connect boards sit at each NE.)
1. On detecting a PW defect, NE2 sends a BDI packet on the reverse PW to notify NE1 of the PW defect.
2. On receiving the BDI packet from NE2, NE1 selects the standby service processing board to send the service signal to the protection PW after the preset hold-off time elapses.
3. After switching, the route that the services from NE1 to NE4 take is as follows: NE1 -> NE3 -> NE4.
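The BDI-triggered rerouting above can be sketched as follows (an illustrative model; the function and parameter names are invented):

```python
# Sketch of PW FPS rerouting: NE1 moves to the protection route only after a
# BDI notification arrives and the hold-off time has elapsed.

def fps_route(route, bdi_received, hold_off_elapsed=True):
    """Return the route the service takes after evaluating a PW defect."""
    if bdi_received and hold_off_elapsed:
        return ["NE1", "NE3", "NE4"]   # standby board feeds the protection PW
    return route                       # otherwise keep the working route

working_route = ["NE1", "NE2", "NE4"]
assert fps_route(working_route, bdi_received=True) == ["NE1", "NE3", "NE4"]
assert fps_route(working_route, bdi_received=False) == ["NE1", "NE2", "NE4"]
```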
Figure 13-10 Principle of PW FPS protection (NE1 -> NE4, after switching)
(Figure: After switching, NE1's standby service processing board sends the signal on the protection PW through NE3 to NE4; the working PW through NE2 is no longer used.)
If the non-revertive mode is set, the signal flow between NE1 and NE4 does not change.
If the revertive mode is set, the signal flow between NE1 and NE4 is restored to the state
before switching.
Automatic Switching
Automatic switching can be triggered by SF conditions. The request packet of automatic switching is reported to the APS protocol processing module by means of interrupts. Table 13-5 lists the conditions triggering automatic switching.

Table 13-5 Conditions triggering automatic switching

Switching Condition    Priority
External Switching
External switching is triggered by the switching commands issued by users. Table 13-6 lists the conditions triggering external switching.

Table 13-6 Conditions triggering external switching

Switching Condition             Priority  Description
Clear switching                           The switching conditions are arranged in descending order of priority.
Lockout of protection
Forced switching
Manual switching to working               If an NE is in a switching state with a higher priority, manual switching cannot be performed. Otherwise, services are switched from the protection PW to the working PW.
                                          l If the working PW works properly, manual switching occurs.
                                          l If the working PW fails, manual switching does not occur.
                                          In revertive mode, services cannot be manually switched to the working PW.
Manual switching to protection
Exercise switching
Switching Rules
PW APS switching complies with the following rules:
l In a descending order of priority, the switching commands are arranged as follows: clear switching, lockout of protection, forced switching, automatic switching, manual switching, and exercise switching.
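This priority rule can be sketched as follows (illustrative only; that a lower-priority request is simply ignored is an assumption, not a statement of device behavior):

```python
# Sketch of the command priority rule: a new request is honored only if it
# outranks the current switching state, using the order given in the text.

PRIORITY = ["clear switching", "lockout of protection", "forced switching",
            "automatic switching", "manual switching", "exercise switching"]

def accept(current, request):
    """Accept the request only if it has higher (earlier) priority."""
    return PRIORITY.index(request) < PRIORITY.index(current)

assert accept("manual switching", "forced switching")        # forced preempts manual
assert not accept("forced switching", "exercise switching")  # exercise is lowest
```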
Prerequisites
l
Context
PW APS can be configured for CES services, E-Line services, and E-LAN services carried by PWs. A PW APS protection group can be created when services are initially configured or after services are configured.
The PW APS protection group must be created on both the source NE and the sink NE.
NOTE
This topic takes creating a PW APS protection group for E-Line services as an example. The configuration method provides a reference for creating PW APS protection groups for other services.
For details on how to create CES services, E-Line services, and E-LAN services, see "Configuring CES Services", "Configuring E-Line Services", and "Configuring E-LAN Services" in the Configuration Guide (Packet Transport Domain).
Procedure
l When the service is configured initially, set Protection Type to PW APS, create the working PW and protection PW, and create a PW APS protection group.
1. In the NE Explorer, select the required NE and choose Configuration > Packet Configuration > Ethernet Service Management > E-Line Service from the Function Tree.
2.
3. In the New E-Line Service dialog box that is displayed, set the E-Line service parameters according to service planning. To be specific, set Direction to UNI-NNI and Protection Type to PW APS.
4.
5. In the Basic Attributes tab, set the parameters associated with the working PW and protection PW according to network planning.
6. In the Protect Group tab, set the parameters associated with the PW APS protection group.
CAUTION
It is recommended that you set Enabling Status to Disabled during the configuration process. Enable the protocol only after the APS protection group is successfully created on the NEs at both ends. If the APS protocol is first enabled at the local NE and then at the opposite NE, the opposite NE may fail to receive services properly.
7. Set Detection Packet Type to FFD, and Detection Packet Period(ms) to 3.3 ms for PW OAM of the working PW and protection PW.
CAUTION
Detection Packet Period(ms) is set to 3.3 ms to ensure that the protection switching time is within 50 ms. When the delay jitter is large on the live network, the packet detection period of PW OAM needs to be longer than the maximum network delay jitter. Otherwise, PW APS switching may occur frequently.
8. Click OK.
9. Perform the preceding operations to create the PW APS protection group for the opposite NE. Then, enable the APS protocol at both ends.
l When the opposite NE is already configured with services, create the PW APS protection group in the Protect Group tab.
1. In the NE Explorer, select the required NE and choose Configuration > Packet Configuration > Ethernet Service Management > E-Line Service from the Function Tree.
2.
3. Select the required E-Line service, and click PW APS in the Protect Group tab.
4. Click New in the lower right corner of the pane. The Configure PW dialog box is displayed.
5. In the Basic Attributes tab, set the parameters associated with the working PW and protection PW according to network planning.
6. Optional: In the QoS tab, set the QoS attributes of the protection PW according to network planning.
NOTE
When Bandwidth Limit is set to Enabled, you can set CIR(kbit/s) and PIR(kbit/s). To use a created PW policy template, click Policy.
7. In the Protect Group tab, set the parameters associated with the PW APS protection group.
CAUTION
It is recommended that you set Enabling Status to Disabled during the configuration process. Enable the protocol only after the APS protection group is successfully created on the NEs at both ends. If the APS protocol is first enabled at the local NE and then at the opposite NE, the opposite NE may fail to receive services properly.
8. Set Detection Packet Type to FFD, and Detection Packet Period(ms) to 3.3 ms for PW OAM of the working PW and protection PW.
CAUTION
Detection Packet Period(ms) is set to 3.3 ms to ensure that the protection switching time is within 50 ms. When the delay jitter is large on the live network, the packet detection period of PW OAM needs to be longer than the maximum network delay jitter. Otherwise, PW APS switching may occur frequently.
9. Click OK.
10. Perform the preceding operations to create the PW APS protection group for the opposite NE. Then, enable the APS protocol at both ends.
----End
Result
Check whether the working PW and protection PW work properly.
1. In the NE Explorer, select the required NE and choose Configuration > Packet Configuration > MPLS Management > PW Management from the Function Tree.
2. In the PW Management tab, click Query. Then, close the Operation Result dialog box.
3. View Local Operating Status and Remote Operating Status of the working PW and protection PW.
l If the status parameters are Up, the PW works properly.
l If the status parameters are Down, the PW fails.
Follow-up Procedure
Deleting a PW APS protection group
Exercise caution when performing this operation, because the service will be unprotected after the protection group is deleted. To delete a created PW APS protection group, do as follows:
1.
2. Select the required E-Line service, and click PW APS in the Protect Group tab.
3. Select the required PW APS protection group, and click Delete in the lower right corner of the pane.
Prerequisites
l
Context
PW APS slave protection pairs can be configured for CES services, E-Line services, and E-LAN services carried by PWs, and the slave protection pairs can be bound with a created PW APS protection group. PW APS binding can be configured when services are initially configured or after services are configured.
The binding of PW APS slave protection pairs requires the addition of slave protection pairs to both the source NE and the sink NE.
NOTE
This topic takes creating slave protection pairs for E-Line services as an example. The configuration method provides a reference for creating slave protection pairs for other services.
For details on how to create CES services, E-Line services, and E-LAN services, see "Configuring CES Services", "Configuring E-Line Services", and "Configuring E-LAN Services" in the Configuration Guide (Packet Transport Domain).
Procedure
l When the service is configured initially, set Protection Type to Slave Protection Pair, create the working PW and protection PW, and bind the slave protection pair with the created PW APS protection group.
1. In the NE Explorer, select the required NE and choose Configuration > Packet Configuration > Ethernet Service Management > E-Line Service from the Function Tree.
2.
3. Set the E-Line service parameters according to service planning. To be specific, set Direction to UNI-NNI and Protection Type to Slave Protection Pair.
4.
5. In the Basic Attributes tab, set the parameters associated with the working PW and protection PW according to network planning.
6. In the Protect Group tab, select the protection group for the slave protection pair.
7. Click OK.
l When the NE is already configured with services, add the slave protection pair in the Protect Group tab.
1. In the NE Explorer, select the required NE and choose Configuration > Packet Configuration > Ethernet Service Management > E-Line Service from the Function Tree.
2.
3. Select the required E-Line service, and click Slave Protection Pair in the Protect Group tab.
4. Click New in the lower right corner of the pane. The Configure PW dialog box is displayed.
5. In the Basic Attributes tab, set the parameters associated with the working PW and protection PW according to network planning.
6. Optional: In the QoS tab, set the QoS attributes of the protection PW according to network planning.
NOTE
When Bandwidth Limit is set to Enabled, you can set CIR(kbit/s) and PIR(kbit/s). To use a created PW policy template, click Policy.
7. In the Protect Group tab, select the protection group for the slave protection pair.
8. Click OK.
----End
Result
Check whether the working PW and protection PW work properly.
1. In the NE Explorer, select the required NE and choose Configuration > Packet Configuration > MPLS Management > PW Management from the Function Tree.
2. In the PW Management tab, click Query. Then, close the Operation Result dialog box.
3. View Local Operating Status and Remote Operating Status of the working PW and protection PW.
l If the status parameters are Up, the PW works properly.
l If the status parameters are Down, the PW fails.
Follow-up Procedure
Exercise caution when performing this operation, because the service will be unprotected after the slave protection pair is deleted. To delete a created slave protection pair, do as follows:
1.
2. Select the required E-Line service, and click Slave Protection Pair in the Protect Group tab.
3. Select the required slave protection pair, and click Delete in the lower right corner of the pane.
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select the required NE and choose Configuration > Packet Configuration > APS Protection Management from the Function Tree.
Step 2 In the PW APS Management tab, click Query. The created PW APS protection group is displayed.
Step 3 Right-click the PW APS protection group, and then choose Start Protocol from the shortcut menu.
Step 4 A dialog box is displayed, indicating that the operation is successful. The value of Protocol Status of the PW APS protection group becomes Enabled.
Step 5 In the NE Explorer of the opposite NE, perform the preceding operations to start the APS protocol.
----End
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select the required NE and choose Configuration > Packet Configuration > APS Protection Management from the Function Tree.
Step 2 In the PW APS Management tab, click Query. Generally, the values of Working Path Status and Protection Path Status are Available, and the value of Switchover Status is Normal (No Request for Working).
Step 3 Select the created protection group, and choose Function > Manual Switching to Protection. In the Operation Result dialog box that is displayed, click OK.
Step 4 Click Query. The value of Protection Status becomes Manual Switching (Working to Protection), which indicates that the service is switched to the protection PW.
----End
Prerequisites
l The MPLS tunnel carrying the working and protection PWs has been created.
Context
A PW FPS protection group can be created when services are initially configured or after services are configured.
NOTE
For details on how to create E-Line services, see "Configuring E-Line Services" in the Configuration Guide (Packet Transport Domain).
Procedure
l When the service is configured initially, set Protection Type to PW FPS, create the working PW and protection PW, and create a PW FPS protection group.
1. In the NE Explorer, select the desired NE, and choose Configuration > Packet Configuration > Ethernet Service Management > E-Line Service from the Function Tree.
2.
3. In the New E-Line Service dialog box that is displayed, set the E-Line service parameters according to service planning. Specifically, set Direction to UNI-NNI, and Protection Type to PW FPS.
4.
5. In the Basic Attributes tab, set the parameters associated with the working PW and protection PW according to network planning.
6. In the Protection Group tab, set the parameters associated with the PW FPS protection group. For details, see 13.12.1 Parameter Description: PW APS.
7. In the PW OAM tab, set Detection Packet Type to FFD and Detection Packet Period(ms) to 3.3 ms for PW OAM of the working PW and protection PW.
CAUTION
Detection Packet Period(ms) is set to 3.3 ms to ensure that the protection switching time is within 50 ms. When the delay variation is large on the live network, the packet detection period of the PW OAM needs to be larger than the maximum network delay variation. Otherwise, PW FPS switching may occur frequently.
8.
l After the E-Line services are created, create the PW FPS protection group in the Protection Group tab.
1. In the NE Explorer, select the desired NE, and choose Configuration > Packet Configuration > Ethernet Service Management > E-Line Service from the Function Tree.
2.
3. Select the desired E-Line service, and click PW FPS in the Protection Group tab.
4. Click New in the lower right corner of the pane. The Configure PW dialog box is displayed.
5. In the Basic Attributes tab, set the parameters associated with the working PW and protection PW according to network planning.
6. Optional: In the QoS tab, set the QoS attributes of the protection PW according to network planning.
NOTE
When Bandwidth Limit is set to Enabled, you can set CIR(kbit/s) and PIR(kbit/s). To use a created PW policy template, click Policy.
7. In the Protection Group tab, set the parameters associated with the PW FPS protection group. For details, see 13.12.1 Parameter Description: PW APS.
8. In the PW OAM tab, set Detection Packet Type to FFD and Detection Packet Period(ms) to 3.3 ms for PW OAM of the working PW and protection PW.
CAUTION
Detection Packet Period(ms) is set to 3.3 ms to ensure that the protection switching time is within 50 ms. When the delay variation is large on the live network, the packet detection period of the PW OAM needs to be larger than the maximum network delay variation. Otherwise, PW FPS switching may occur frequently.
9.
----End
Result
Check whether the working PW and protection PW work properly.
1.
In the NE Explorer, select the required NE and choose Configuration > Packet
Configuration > MPLS Management > PW Management from the Function Tree.
2.
In the PW Management tab, click Query. Then, close the Operation Result dialog box.
3.
View Local Operating Status and Remote Operating Status of the working PW and
protection PW.
l If the status parameters are Up, the PW works properly.
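The check above can be sketched as a tiny helper: a PW counts as operational only when both its local and remote operating status are Up. The status strings mirror the PW Management tab; the function itself is an illustration, not an NMS API.

```python
def pw_operational(local_status, remote_status):
    """Both ends of the PW must report Up for the PW to carry traffic."""
    return local_status == "Up" and remote_status == "Up"

# The working and protection PWs are checked independently; protection is
# only effective while the protection PW is itself operational.
statuses = {"working": ("Up", "Up"), "protection": ("Up", "Down")}
healthy = {name: pw_operational(*pair) for name, pair in statuses.items()}
print(healthy)  # {'working': True, 'protection': False}
```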
Follow-up Procedure
Deleting a PW FPS Protection Group
CAUTION
Exercise caution when performing this operation, because the service will be unprotected after
the protection group is deleted.
To delete a created PW FPS protection group, perform the following steps:
1.
Select the desired E-Line service, and click PW FPS in the Protection Group tab.
2.
Select the desired PW FPS protection group, and click Delete in the lower right corner of
the pane.
For rules for setting the PW FPS parameters, see 13.12 Parameter Description: MPLS PW APS.
(Figure: PSN networking with NE2, NE3, and NE4 between the NodeB and the RNC, showing a working PW and a protection PW.)
The same configuration methods are used for the services and protection between NE1 and NE3,
and between NE4 and NE3. Therefore, this example describes the configurations of the services
and protection between NE1 and NE3 only.
As shown in Figure 13-12, one E-Line service, E-Line-1, is transmitted on the trail NodeB 1 -> NE1 -> NE2 -> NE3 and is configured with a PW APS protection group named APS1. The other E-Line service, E-Line-2, is transmitted on the trail NodeB 2 -> NE1 -> NE2 -> NE3 and is configured with a PW APS slave protection pair that is bound with APS1.
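The binding between APS1 and the slave pair can be sketched as a small data model. This is only an illustration of the relationship; the class names ProtectionGroup and SlavePair are invented for the sketch and are not U2000 objects. The key property is that the slave pair runs no APS protocol of its own and simply follows the switch position of the group it is bound to.

```python
from dataclasses import dataclass, field

@dataclass
class ProtectionGroup:
    """Master PW APS group: runs the APS protocol and owns the switch state."""
    name: str
    working_pw: int
    protection_pw: int
    active: str = "working"          # which PW currently carries traffic
    slaves: list = field(default_factory=list)

    def switch_to_protection(self):
        self.active = "protection"   # bound slave pairs follow implicitly

@dataclass
class SlavePair:
    """Slave protection pair: bound to a master group, no APS of its own."""
    working_pw: int
    protection_pw: int
    master: ProtectionGroup = None

    @property
    def active_pw(self):
        # The slave selects its own working or protection PW according to
        # the master's switch position.
        return self.working_pw if self.master.active == "working" else self.protection_pw

# E-Line-1 is protected by APS1; E-Line-2's PWs follow APS1's decisions.
aps1 = ProtectionGroup("APS1", working_pw=1, protection_pw=3)
pair2 = SlavePair(working_pw=5, protection_pw=7, master=aps1)
aps1.slaves.append(pair2)
aps1.switch_to_protection()
print(pair2.active_pw)  # the slave now uses its protection PW, 7
```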
Figure 13-12 Networking diagram of PW APS between NE1 and NE3
(Diagram: NodeB 1 and NodeB 2 connect to NE1. Between NE1 and NE2 run MS-PW11_1/MS-PW22_1 (working) and MS-PW33_1/MS-PW44_1 (protection); between NE2 and NE3 run MS-PW11_2/MS-PW22_2 (working) and MS-PW33_2/MS-PW44_2 (protection). NE3 connects to the RNC. MS-PW11/MS-PW33 form the PW APS protection group; MS-PW22/MS-PW44 form the PW APS slave protection pair.)
The PWs and the tunnels that carry them are planned as follows.

l E-Line-1 (PW APS protection group APS1):
- Working PW: MS-PW11_1 between NE1 and NE2 (carried by MPLS Tunnel 1) and MS-PW11_2 between NE2 and NE3 (carried by MPLS Tunnel 2)
- Protection PW: MS-PW33_1 between NE1 and NE2 (carried by MPLS Tunnel 3) and MS-PW33_2 between NE2 and NE3 (carried by MPLS Tunnel 4)
l E-Line-2 (slave protection pair of APS1):
- Working PW: MS-PW22_1 between NE1 and NE2 (carried by MPLS Tunnel 1) and MS-PW22_2 between NE2 and NE3 (carried by MPLS Tunnel 2)
- Protection PW: MS-PW44_1 between NE1 and NE2 (carried by MPLS Tunnel 3) and MS-PW44_2 between NE2 and NE3 (carried by MPLS Tunnel 4)
Service Planning
NE1, NE2, and NE3 are OptiX OSN 3500 NEs in this example. Table 13-8 lists the NE parameter planning.

Table 13-8 NE parameter planning

l NE1 (LSR ID: 1.0.0.1)
- 21-N1PETF8-1(PORT-1): UNI; Port mode: Layer 2; Tag: Tag Aware. Interface of E-Line-1 on the NodeB 1 side.
- 21-N1PETF8-2(PORT-2): UNI; Port mode: Layer 2; Tag: Tag Aware. Interface of E-Line-2 on the NodeB 2 side.
- 3-N1PEG8-1(PORT-1): NNI; Port mode: Layer 3; Tag: Tag Aware; IP address/subnet mask: 10.0.0.1/255.255.255.0. Interface that carries the working PWs (MS-PW11_1 and MS-PW22_1).
- 1-N1PEG8-1(PORT-1): NNI; Port mode: Layer 3; Tag: Tag Aware; IP address/subnet mask: 10.0.2.1/255.255.255.0. Interface that carries the protection PWs (MS-PW33_1 and MS-PW44_1).
l NE2 (LSR ID: 1.0.0.2)
- 3-N1PEG8-1(PORT-1): NNI; Port mode: Layer 3; Tag: Tag Aware; IP address/subnet mask: 10.0.0.2/255.255.255.0. Interface that carries the working PWs (MS-PW11_1 and MS-PW22_1).
- 1-N1PEG8-1(PORT-1): NNI; Port mode: Layer 3; Tag: Tag Aware; IP address/subnet mask: 10.0.2.2/255.255.255.0. Interface that carries the protection PWs (MS-PW33_1 and MS-PW44_1).
- 3-N1PEG8-2(PORT-2): NNI; Port mode: Layer 3; Tag: Tag Aware; IP address/subnet mask: 10.0.1.1/255.255.255.0. Interface that carries the working PWs (MS-PW11_2 and MS-PW22_2).
- 1-N1PEG8-2(PORT-2): NNI; Port mode: Layer 3; Tag: Tag Aware; IP address/subnet mask: 10.0.3.1/255.255.255.0. Interface that carries the protection PWs (MS-PW33_2 and MS-PW44_2).
l NE3 (LSR ID: 1.0.0.3)
- 3-N1PEG8-1(PORT-1) and 1-N1PEG8-1(PORT-1): UNI; Port mode: Layer 2; Tag: Tag Aware. Interfaces of E-Line-1 and E-Line-2 on the RNC side.
- 3-N1PEG8-2(PORT-2): NNI; Port mode: Layer 3; Tag: Tag Aware; IP address/subnet mask: 10.0.1.2/255.255.255.0. Interface that carries the working PWs (MS-PW11_2 and MS-PW22_2).
- 1-N1PEG8-2(PORT-2): NNI; Port mode: Layer 3; Tag: Tag Aware; IP address/subnet mask: 10.0.3.2/255.255.255.0. Interface that carries the protection PWs (MS-PW33_2 and MS-PW44_2).
Table 13-9 lists the parameter planning for the tunnels carrying the PWs.

Table 13-9 Parameter planning for tunnels carrying PWs

l Tunnel that carries the working PW between NE1 and NE2: Tunnel Name: NE1-NE2 working; Signaling Type: Static; Scheduling Type: E-LSP; Bandwidth: No restriction; Advanced Attribute of Ingress Node: NE1, NNI: 3-N1PEG8-1(PORT-1); Advanced Attribute of Egress Node: NE2, NNI: 3-N1PEG8-1(PORT-1).
l Tunnel that carries the protection PW between NE1 and NE2: Tunnel Name: NE1-NE2 protection; Signaling Type: Static; Scheduling Type: E-LSP; Bandwidth: No restriction; Advanced Attribute of Ingress Node: NE1, NNI: 1-N1PEG8-1(PORT-1); Advanced Attribute of Egress Node: NE2, NNI: 1-N1PEG8-1(PORT-1).
l Tunnel that carries the working PW between NE2 and NE3: Tunnel Name: NE2-NE3 working; Signaling Type: Static; Scheduling Type: E-LSP; Bandwidth: No restriction; Advanced Attribute of Ingress Node: NE2, NNI: 3-N1PEG8-2(PORT-2); Advanced Attribute of Egress Node: NE3, NNI: 3-N1PEG8-1(PORT-1).
l Tunnel that carries the protection PW between NE2 and NE3: Tunnel Name: NE2-NE3 protection; Signaling Type: Static; Scheduling Type: E-LSP; Bandwidth: No restriction; Advanced Attribute of Ingress Node: NE2, NNI: 1-N1PEG8-2(PORT-2); Advanced Attribute of Egress Node: NE3, NNI: 1-N1PEG8-1(PORT-1).
Table 13-10 lists the parameter planning for the E-Line services.

Table 13-10 Parameter planning for E-Line services

l E-Line-1: Service ID: 1; Direction: UNI-NNI; Source Port: NE1: 21-N1PETF8-1(PORT-1), NE3: 3-N1PEG8-1(PORT-1); Source VLANs: 100; Bearer Type: PW; Protection Type: PW APS.
l E-Line-2: Service ID: 2; Direction: UNI-NNI; Source Port: NE1: 21-N1PETF8-2(PORT-2), NE3: 1-N1PEG8-1(PORT-1); Source VLANs: 200; Bearer Type: PW; Protection Type: Slave Protection Pair.
The MS-PWs on NE2 are planned as follows. All of them use PW Signaling Type: Static, PW Type: Ethernet, and PW Direction: Bidirectional.

l MS-PW 11 (MS-PW ID: 11; Name: Working PW of E-Line-1)
- Forward PW (working PW from NE1 to NE2): PW ID: 1; PW Incoming/Outgoing Label: 16; Tunnel No.: 1 (NE1-NE2Working); Peer LSR ID: 1.0.0.1.
- Backward PW (working PW from NE2 to NE3): PW ID: 2; PW Incoming/Outgoing Label: 20; Tunnel No.: 2 (NE2-NE3Working); Peer LSR ID: 1.0.0.3.
l MS-PW 33 (MS-PW ID: 33; Name: Protection PW of E-Line-1)
- Forward PW (protection PW from NE1 to NE2): PW ID: 3; PW Incoming/Outgoing Label: 30; Tunnel No.: 3 (NE1-NE2Protection); Peer LSR ID: 1.0.0.1.
- Backward PW (protection PW from NE2 to NE3): PW ID: 4; PW Incoming/Outgoing Label: 40; Tunnel No.: 4 (NE2-NE3Protection); Peer LSR ID: 1.0.0.3.
l MS-PW 22 (MS-PW ID: 22; Name: Working PW of E-Line-2)
- Forward PW (working PW from NE1 to NE2): PW ID: 5; PW Incoming/Outgoing Label: 50; Tunnel No.: 1 (NE1-NE2Working); Peer LSR ID: 1.0.0.1.
- Backward PW (working PW from NE2 to NE3): PW ID: 6; PW Incoming/Outgoing Label: 60; Tunnel No.: 2 (NE2-NE3Working); Peer LSR ID: 1.0.0.3.
l MS-PW 44 (MS-PW ID: 44; Name: Protection PW of E-Line-2)
- Forward PW (protection PW from NE1 to NE2): PW ID: 7; PW Incoming/Outgoing Label: 70; Tunnel No.: 3 (NE1-NE2Protection); Peer LSR ID: 1.0.0.1.
- Backward PW (protection PW from NE2 to NE3): PW ID: 8; PW Incoming/Outgoing Label: 80; Tunnel No.: 4 (NE2-NE3Protection); Peer LSR ID: 1.0.0.3.
Table 13-12 lists the parameter planning for the PW APS protection groups.

Table 13-12 Parameter planning for PW APS protection groups

l Protection Type: PW APS
l PW APS protection group (APS1): Working PW ID: NE1: 1, NE3: 2; Protection PW ID: NE1: 3, NE3: 4.
l Slave protection pair of APS1: Working PW ID: NE1: 5, NE3: 6; Protection PW ID: NE1: 7, NE3: 8.
Prerequisites
l The E-Line service interface must be configured. For details on the configuration method, see "Configuring an Ethernet Port" in the Configuration Guide (Packet Transport Plane).
l The Layer 3 NNI must be configured. For details on the configuration method, see "Configuring an Ethernet Port" in the Configuration Guide (Packet Transport Plane).
l The static MPLS tunnel must be created. For details on how to configure a static MPLS tunnel in an end-to-end manner, see "Configuring an MPLS Tunnel in an End-to-End Mode" in the Configuration Guide (Packet Transport Plane); for details on how to configure a static MPLS tunnel on a per-NE basis, see "Configuring an MPLS Tunnel on a Per-NE Basis" in the Configuration Guide (Packet Transport Plane).
Context
For details on the parameter planning for the E-Line-1 service and the PW APS protection group
involved, see 13.9.1 Description of the Example.
This example focuses on the configuration method and parameters of a PW APS protection
group. For details on how to configure E-Line services, see "E-Line Services Carried by PWs"
in the Configuration Guide (Packet Transport Plane).
Procedure
Step 1 In the NE Explorer of NE1, create the E-Line-1 service from NE1 to NE3 and its PW APS
protection group. For details, see 13.8.1 Configuring PW APS Protection Groups.
1.
Create an E-Line service, and set the relevant parameters according to service planning.
For details, see Step 1 to Step 3 in "13.8.1 Configuring PW APS Protection Groups."
Set the parameters as follows:
l Set Service ID to 1.
l Set Service Name to E-Line-1.
l Set Direction to UNI-NNI.
l Set Source Port to 21-N1PETF8-1(PORT-1).
l Set Source VLANs to 100.
l Set Bearer Type to PW.
l Set Protection Type to PW APS.
2.
Configure PWs, and set the basic parameters of the working PW and protection PW. For
details, see Step 4 to Step 5 in "13.8.1 Configuring PW APS Protection Groups." Set
the parameters as follows:
l Set PW ID (working PW) to 1.
l Set PW ID (protection PW) to 3.
l Set PW Incoming Label/Source Port (working PW) to 16.
l Set PW Outgoing Label/Source Port (working PW) to 16.
l Set PW Incoming Label/Source Port (protection PW) to 30.
l Set PW Outgoing Label/Source Port (protection PW) to 30.
l Set Tunnel No. (working PW) to 1(NE1-NE2Working).
l Set Tunnel No. (protection PW) to 3(NE1-NE2Protection).
l Set Peer LSR ID (working PW) to 1.0.0.2.
l Set Peer LSR ID (protection PW) to 1.0.0.2.
3.
Optional: In the QoS tab, set the parameters of QoS according to network planning. This
example does not involve QoS planning.
4.
In the Advanced Attributes tab, the parameters take the default values.
5.
In the Protection Group tab, set the parameters such as Protection Group ID and
Enabling Status. For details, see Step 6 in "13.8.1 Configuring PW APS Protection
Groups." Set the parameters as follows:
l Set Protection Group ID to 1.
l Set Enabling Status to Disabled.
l Set Protection Type to 1:1.
l Set Revertive Mode to Non-Revertive.
6.
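The protection group is configured independently on each end, and the group attributes must agree on both ends for the APS protocol to negotiate. A quick cross-check of the two ends' settings can be sketched as follows; the dictionaries stand in for the NMS forms and the helper function is hypothetical, with values taken from this example's planning.

```python
# Attributes entered in the Protection Group tab on each end of APS1.
ne1 = {"protection_group_id": 1, "protection_type": "1:1",
       "enabling_status": "Disabled", "revertive_mode": "Non-Revertive"}
ne3 = {"protection_group_id": 1, "protection_type": "1:1",
       "enabling_status": "Disabled", "revertive_mode": "Non-Revertive"}

def mismatches(a, b):
    """Return the attribute names whose values differ between the two ends."""
    return sorted(k for k in a if a[k] != b.get(k))

# An empty list means the two ends agree and APS can negotiate.
print(mismatches(ne1, ne3))  # []
```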
Step 2 In the NE Explorer of NE3, create the E-Line-1 service from NE3 to NE1 and its PW APS
protection group. For details, see 13.8.1 Configuring PW APS Protection Groups.
1.
Create an E-Line service, and set the relevant parameters according to service planning.
For details, see Step 1 to Step 3 in "13.8.1 Configuring PW APS Protection Groups."
Set the parameters as follows:
l Set Service ID to 1.
l Set Service Name to E-Line-1.
l Set Direction to UNI-NNI.
l Set Source Port to 3-N1PEG8-1(PORT-1).
l Set Source VLANs to 100.
l Set Bearer Type to PW.
l Set Protection Type to PW APS.
2.
Configure PWs, and set the basic parameters of the working PW and protection PW. For
details, see Step 4 to Step 5 in "13.8.1 Configuring PW APS Protection Groups." Set
the parameters as follows:
l Set PW ID (working PW) to 2.
l Set PW ID (protection PW) to 4.
l Set PW Incoming Label/Source Port (working PW) to 20.
l Set PW Outgoing Label/Source Port (working PW) to 20.
l Set PW Incoming Label/Source Port (protection PW) to 40.
l Set PW Outgoing Label/Source Port (protection PW) to 40.
l Set Tunnel No. (working PW) to 2(NE2-NE3Working).
l Set Tunnel No. (protection PW) to 4(NE2-NE3Protection).
l Set Peer LSR ID (working PW) to 1.0.0.2.
l Set Peer LSR ID (protection PW) to 1.0.0.2.
3.
Optional: In the QoS tab, set the parameters of QoS according to network planning. This
example does not involve QoS planning.
4.
In the Advanced Attributes tab, the parameters take the default values.
5.
In the Protection Group tab, set the parameters such as Protection Group ID and
Enabling Status. For details, see Step 6 in "13.8.1 Configuring PW APS Protection
Groups." Set the parameters as follows:
l Set Protection Group ID to 1.
l Set Enabling Status to Disabled.
l Set Protection Type to 1:1.
l Set Revertive Mode to Non-Revertive.
6.
Step 3 In the NE Explorer of NE2, create MS-PW 11 (working) and MS-PW 33 (protection) for E-Line-1.
1.
2.
3.
In the Create MS PW dialog box that is displayed, set the parameters of the working PW
of E-Line-1.
l Set ID to 11.
l Set Name to E-Line-1WorkingPW.
l Set Service Type to Ethernet Service.
l Set PW ID (forward PW) to 1. (The forward PW is the working PW1 from NE1 to
NE2.)
l Set PW ID (backward PW) to 2. (The backward PW is the working PW2 from NE2 to
NE3.)
l Set PW Signaling Type to Static. (In this example, the PW egress/ingress label is
manually configured.)
l Set PW Type to Ethernet.
l Set PW Incoming Label/Source Port (forward PW) to 16.
l Set PW Egress Label/Source Port (forward PW) to 16.
l Set PW Incoming Label/Source Port (backward PW) to 20.
l Set PW Egress Label/Source Port (backward PW) to 20.
l Set Tunnel Selection Mode to Manually. (In this example, the tunnel carrying PWs is
manually selected.)
l Set Tunnel No. (forward PW) to 1(NE1-NE2Working). (The tunnel carrying the
forward PW is MPLS tunnel 1 from NE1 to NE2.)
l Set Tunnel No. (backward PW) to 2(NE2-NE3Working). (The tunnel carrying the
backward PW is MPLS tunnel 2 from NE2 to NE3.)
l Set Peer LSR ID (forward PW) to 1.0.0.1. (The forward PW is the PW from NE1 to
NE2, and the Peer LSR ID is the LSR ID of NE1.)
l Set Peer LSR ID (backward PW) to 1.0.0.3. (The backward PW is the PW from NE2
to NE3, and the Peer LSR ID is the LSR ID of NE3.)
4.
Optional: In the QoS tab, set the parameters of QoS according to network planning. This
example does not involve QoS planning.
5.
In the Advanced Attributes tab, the parameters take the default values.
6.
7.
Perform the preceding operations to create the protection PW, that is, MS-PW 33.
l Set ID to 33.
l Set Name to E-Line-1ProtectionPW.
l Set Service Type to Ethernet Service.
l Set PW ID (forward PW) to 3.
l Set PW ID (backward PW) to 4.
l Set PW Signaling Type to Static. (In this example, the PW egress/ingress label is
manually configured.)
l Set PW Type to Ethernet.
l Set PW Incoming Label/Source Port (forward PW) to 30.
l Set PW Egress Label/Source Port (forward PW) to 30.
l Set PW Incoming Label/Source Port (backward PW) to 40.
l Set PW Egress Label/Source Port (backward PW) to 40.
l Set Tunnel Selection Mode to Manually. (In this example, the tunnel carrying PWs is
manually selected.)
l Set Tunnel No. (forward PW) to 3(NE1-NE2Protection).
l Set Tunnel No. (backward PW) to 4(NE2-NE3Protection).
l Set Peer LSR ID (forward PW) to 1.0.0.1. (The forward PW is the PW from NE1 to
NE2, and the Peer LSR ID is the LSR ID of NE1.)
l Set Peer LSR ID (backward PW) to 1.0.0.3. (The backward PW is the PW from NE2
to NE3, and the Peer LSR ID is the LSR ID of NE3.)
Step 4 In the NE Explorer of NE1 and NE3, start the APS protocol. For details, see 13.8.3 Starting the
APS Protocol.
Step 5 Set Detection Packet Type to FFD and Detection Packet Period(ms) to 3.3 ms for PW OAM
of the working PW and protection PW. For details, see 7.9.2 Setting the Parameters of PW
OAM.
----End
Prerequisites
l The E-Line service interface must be configured. For details on the configuration method, see "Configuring an Ethernet Port" in the Configuration Guide (Packet Transport Plane).
l The Layer 3 NNI must be configured. For details on the configuration method, see "Configuring an Ethernet Port" in the Configuration Guide (Packet Transport Plane).
l The static MPLS tunnel must be created. For details on how to configure a static MPLS tunnel in an end-to-end manner, see "Configuring an MPLS Tunnel in an End-to-End Mode" in the Configuration Guide (Packet Transport Plane); for details on how to configure a static MPLS tunnel on a per-NE basis, see "Configuring an MPLS Tunnel on a Per-NE Basis" in the Configuration Guide (Packet Transport Plane).
Context
For details on the parameter planning for the E-Line-2 service and PW APS slave protection
pair, see 13.9.1 Description of the Example.
This example focuses on the configuration method and parameters of a PW APS slave protection
pair. For details on how to configure E-Line services, see "E-Line Services Carried by PWs" in
the Configuration Guide (Packet Transport Plane).
Procedure
Step 1 In the NE Explorer of NE1, create the E-Line-2 service from NE1 to NE3 and its PW APS
protection group. For details, see 13.8.2 Configuring Slave Protection Pairs of PW APS.
1.
Create an E-Line service, and set the relevant parameters according to service planning.
For details, see Step 1 to Step 3 in "13.8.2 Configuring Slave Protection Pairs of PW
APS." Set the parameters as follows:
l Set Service ID to 2.
l Set Service Name to E-Line-2.
l Set Direction to UNI-NNI.
l Set Source Port to 21-N1PETF8-2(PORT-2).
l Set Source VLANs to 200.
l Set Bearer Type to PW.
l Set Protection Type to Slave Protection Pair.
2.
Configure PWs, and set the basic parameters of the working PW and protection PW. For
details, see Step 4 to Step 5 in "13.8.2 Configuring Slave Protection Pairs of PW
APS." Set the parameters as follows:
l Set PW ID (working PW) to 5.
l Set PW ID (protection PW) to 7.
l Set PW Incoming Label/Source Port (working PW) to 50.
l Set PW Outgoing Label/Sink Port (working PW) to 50.
l Set PW Incoming Label/Source Port (protection PW) to 70.
l Set PW Outgoing Label/Sink Port (protection PW) to 70.
l Set Tunnel No. (working PW) to 1(NE1-NE2Working).
l Set Tunnel No. (protection PW) to 3(NE1-NE2Protection).
l Set Peer LSR ID (working PW) to 1.0.0.2.
l Set Peer LSR ID (protection PW) to 1.0.0.2.
3.
Optional: In the QoS tab, set the parameters of QoS according to network planning. This
example does not involve QoS planning.
4.
In the Advanced Attributes tab, the parameters take the default values.
5.
In the Protection Group tab, set the relevant parameters. For details, see Step 6 in "13.8.2
Configuring Slave Protection Pairs of PW APS." Set the parameters as follows:
l Set Protection Type to Slave Protection Pair.
l Set Protection Group ID to 1. (The working PW and protection PW of E-Line-2 share
the same source and sink as the created PW APS protection group APS1.)
6.
Step 2 In the NE Explorer of NE3, create the E-Line-2 service from NE3 to NE1 and its PW APS slave
protection pair. For details, see 13.8.2 Configuring Slave Protection Pairs of PW APS.
1.
Create an E-Line service, and set the relevant parameters according to service planning.
For details, see Step 1 to Step 3 in "13.8.2 Configuring Slave Protection Pairs of PW
APS." Set the parameters as follows:
l Set Service ID to 2.
l Set Service Name to E-Line-2.
l Set Direction to UNI-NNI.
l Set Source Port to 1-N1PEG8-1(PORT-1).
l Set Source VLANs to 200.
l Set Bearer Type to PW.
l Set Protection Type to Slave Protection Pair.
2.
Configure PWs, and set the basic parameters of the working PW and protection PW. For
details, see Step 4 to Step 5 in "13.8.2 Configuring Slave Protection Pairs of PW
APS." Set the parameters as follows:
l Set PW ID (working PW) to 6.
l Set PW ID (protection PW) to 8.
l Set PW Incoming Label/Source Port (working PW) to 60.
l Set PW Outgoing Label/Sink Port (working PW) to 60.
l Set PW Incoming Label/Source Port (protection PW) to 80.
l Set PW Outgoing Label/Sink Port (protection PW) to 80.
l Set Tunnel No. (working PW) to 2(NE2-NE3Working).
l Set Tunnel No. (protection PW) to 4(NE2-NE3Protection).
l Set Peer LSR ID (working PW) to 1.0.0.2.
l Set Peer LSR ID (protection PW) to 1.0.0.2.
3.
Optional: In the QoS tab, set the parameters of QoS according to network planning. This
example does not involve QoS planning.
4.
In the Advanced Attributes tab, the parameters take the default values.
5.
In the Protection Group tab, set the relevant parameters. For details, see Step 6 in "13.8.2
Configuring Slave Protection Pairs of PW APS." Set the parameters as follows:
l Set Protection Type to Slave Protection Pair.
l Set Protection Group ID to 1. (The working PW and protection PW of E-Line-2 share
the same source and sink as the created PW APS protection group APS1.)
6.
Step 3 In the NE Explorer of NE2, create MS-PW 22 (working) and MS-PW 44 (protection) for E-Line-2.
1.
2.
3.
4.
Optional: In the QoS tab, set the parameters of QoS according to network planning. This
example does not involve QoS planning.
5.
In the Advanced Attributes tab, the parameters take the default values.
6.
7.
Perform the preceding operations to create the protection PW, that is, MS-PW 44.
l Set ID to 44.
l Set Name to E-Line-2ProtectionPW.
l Set Service Type to Ethernet Service.
l Set PW ID (forward PW) to 7.
l Set PW ID (backward PW) to 8.
l Set PW Signaling Type to Static. (In this example, the PW egress/ingress label is
manually configured.)
l Set PW Type to Ethernet.
Prerequisites
l
The PW APS protection group must be created and the APS protocol must be started.
Procedure
Step 1 Manually perform PW APS switching, and query the switchover status. For details, see 13.8.4
Performing External Switching of PW APS.
NOTE
When the working PW and protection PW work properly, the status is Available. When the PWs fail to
work properly, the status is Unavailable. In this case, you need to handle the fault. Double-click PW ID. In
the PW Management dialog box that is displayed, check the PW running status, and rectify the fault
according to the PW status.
Step 2 Choose Configuration > Packet Configuration > Ethernet Service Management > E-Line
Service from the Function Tree.
Step 3 Right-click the service in the protection group, and choose Browse Current Alarms from the shortcut
menu. Check whether any service alarm is reported after protection switching.
l If any service alarm is reported, see the Alarms and Performance Events Reference to clear the alarm.
Step 4 Check the service connectivity by means of the ETH-OAM function. For details, see 7.9.2
Setting the Parameters of PW OAM.
l If the service is normal, it indicates that the PW APS protection group is configured correctly.
Huawei Proprietary and Confidential
Copyright Huawei Technologies Co., Ltd.
----End
Follow-up Procedure
After the verification is completed, perform Manual Switching to Working to switch the
service back to the working PW.
l E-Line services are configured on NE1, NE2, and NE3 for transmitting data services between the NodeB and the RNC.
l PW OAM is enabled on NE1, NE2, and NE3 for NNI-side service fault detection.
l NE2 and NE3 are IWF-enabled, that is, Associate with AC Status for PW OAM is set to Enabled. Fault information on the UNI side is transparently transmitted to NE1, triggering PW FPS switching and protecting UNI-side services.
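The behavior just described can be sketched as a predicate: NE1 switches when PW OAM reports a defect on the working PW, and, because NE2 and NE3 associate AC status with PW OAM (IWF), a UNI-side fault at the far end also surfaces to NE1 as a PW defect. The function below is an illustrative model, not the equipment's actual state machine:

```python
def should_switch(working_pw_oam_defect, far_end_ac_fault, iwf_enabled):
    """Decide whether NE1's PW FPS group switches to the protection PW.

    Illustrative model: a UNI (AC) fault on the far end is only visible to
    NE1 when the far end maps AC status into PW OAM (IWF enabled).
    """
    ac_fault_visible = far_end_ac_fault and iwf_enabled
    return working_pw_oam_defect or ac_fault_visible

# An NNI fault on the working PW always triggers switching.
assert should_switch(True, False, iwf_enabled=False)
# A UNI-side fault triggers switching only because IWF relays it.
assert should_switch(False, True, iwf_enabled=True)
assert not should_switch(False, True, iwf_enabled=False)
```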
(Figure: PW FPS networking. The NodeB connects to NE1 at UNI port 21-N1PETF8-1(PORT-1). NE1 connects across the PSN to NE2 through NNI port 3-N1PEG8-1(PORT-1) and to NE3 through NNI port 3-N1PEG8-2(PORT-2). NE2 and NE3, both IWF-enabled, connect to the RNC at UNI ports 1-N1PEG8-1(PORT-1). The working PW (A) runs through NE2 and the protection PW (S) runs through NE3, with PW FPS protecting between them.)
As shown in Figure 13-14, one E-Line service, E-Line-1, is transmitted on the trails
NodeBNE1NE2RNC and NE3RNC.
Figure 13-14 PW FPS planning
(Diagram: E-Line-1 runs from the NodeB through NE1; working PW 1 (A, active) passes through NE2 to the RNC, and protection PW 2 (S, standby) passes through NE3 to the RNC, with PW FPS switching between them.)
E-Line-1 is carried by working PW 1 over MPLS Tunnel 1 (NE1-NE2) and by protection PW 2 over MPLS Tunnel 2 (NE1-NE3), and is configured with a PW FPS protection group on NE1.
Service Planning
NE1, NE2, and NE3 are OptiX OSN 3500s in this example. Table 13-14 lists the NE parameter
planning.
Table 13-14 NE parameter planning

l NE1 (LSR ID: 1.0.0.1)
- 21-N1PETF8-1(PORT-1): UNI; Port mode: Layer 2; Tag: Tag Aware. Interface of E-Line-1 on the NodeB side.
- 3-N1PEG8-1(PORT-1): NNI; Port mode: Layer 3; Tag: Tag Aware; IP address/subnet mask: 10.0.0.1/255.255.255.252. Interface that carries working PW 1.
- 3-N1PEG8-2(PORT-2): NNI; Port mode: Layer 3; Tag: Tag Aware; IP address/subnet mask: 10.0.1.1/255.255.255.252. Interface that carries protection PW 2.
l NE2 (LSR ID: 1.0.0.2)
- 1-N1PEG8-1(PORT-1): UNI; Port mode: Layer 2; Tag: Tag Aware. Interface of E-Line-1 on the RNC side.
- 3-N1PEG8-1(PORT-1): NNI; Port mode: Layer 3; Tag: Tag Aware; IP address/subnet mask: 10.0.0.2/255.255.255.252. Interface that carries working PW 1.
l NE3 (LSR ID: 1.0.0.3)
- 1-N1PEG8-1(PORT-1): UNI; Port mode: Layer 2; Tag: Tag Aware. Interface of E-Line-1 on the RNC side.
- 3-N1PEG8-1(PORT-1): NNI; Port mode: Layer 3; Tag: Tag Aware; IP address/subnet mask: 10.0.1.2/255.255.255.252. Interface that carries protection PW 2.
Table 13-15 lists the parameter planning for the MPLS tunnels carrying the PWs.

Table 13-15 Parameter planning for MPLS tunnels carrying PWs

l Tunnel 1 (Tunnel Name: NE1-NE2(Tunnel1)): Signaling Type: Static CR; Scheduling Type: E-LSP; Bandwidth: No restriction; Advanced Attribute of Ingress Node: NE1, Out Interface: 3-N1PEG8-1(PORT-1); Advanced Attribute of Egress Node: NE2, Out Interface: 3-N1PEG8-1(PORT-1).
l Tunnel 2 (Tunnel Name: NE1-NE3(Tunnel2)): Signaling Type: Static CR; Scheduling Type: E-LSP; Bandwidth: No restriction; Advanced Attribute of Ingress Node: NE1, Out Interface: 3-N1PEG8-2(PORT-2), Outgoing Label: 99; Advanced Attribute of Egress Node: NE3, Out Interface: 3-N1PEG8-1(PORT-1), Incoming Label: 99.
Table 13-16 lists the parameter planning for the E-Line service.

Table 13-16 Parameter planning for E-Line services

l E-Line-1: Direction: UNI-NNI; Source Port: NE1: 21-N1PETF8-1(PORT-1), NE2: 1-N1PEG8-1(PORT-1), NE3: 1-N1PEG8-1(PORT-1); Source VLANs: 100; Bearer Type: PW; Protection Type: NE1: PW FPS, NE2 and NE3: Unprotected.
The PW basic attributes are planned as follows (working PW / protection PW):

l PW ID: 1 / 2
l PW Signaling Type: Static / Static
l PW Type: Ethernet / Ethernet
l PW Direction: Bidirectional / Bidirectional
l PW Encapsulation Type: MPLS / MPLS
l PW Incoming Label: 16 / 17
l PW Outgoing Label: 16 / 17
l Tunnel No.: 1(NE1-NE2(Tunnel1)) / 2(NE1-NE3(Tunnel2))
l Peer LSR ID: 1.0.0.2 / 1.0.0.3
Table 13-18 lists the parameter planning for the PW FPS protection group.

Table 13-18 Parameter planning for PW FPS protection groups

l Protection Type: PW FPS
l Enabling Status: Enabled
l Working PW ID: 1
l Protection PW ID: 2
l Revertive Mode: Revertive
l Detection Method: OAM
l OAM Status: Enabled
l Associate with AC Status: NE1: Disabled; NE2 and NE3: Enabled
l Detection Mode: Adaptive
l Detection Packet Type: FFD
l Detection Packet Period(ms): 3.3
l LSR ID to be Received: NE1: working PW: 1.0.0.2, protection PW: 1.0.0.3; NE2 and NE3: 10.0.0.1
l PW ID to be Received: NE1: working PW: 1, protection PW: 2; NE2: 1; NE3: 2
Prerequisites
l You are familiar with the networking, requirements, and service planning of the example.
2.
3.
Create bidirectional tunnels between NE1 and NE2, and between NE1 and NE3 in end-to-end mode. For details, see Configuring a Static and Bidirectional MPLS Tunnel in End-to-End Mode in the Configuration Guide (Packet Transport Domain).
4.
Configure UNI-NNI E-Line services for NE1, NE2, and NE3 in per-NE mode and create
a PW FPS protection group on NE1.
5.
Set OAM Status and Associate with AC Status to Enabled for the PW OAM of NE2 and
NE3.
NOTE
In the following steps, the parameters take default values unless otherwise specified.
Procedure
Step 1 Configure the LSR IDs of NE1, NE2, and NE3. For details, see Configuring LSR ID in the
Configuration Guide (Packet Transport Domain).
l NE1 LSR ID: 1.0.0.1
l NE2 LSR ID: 1.0.0.2
l NE3 LSR ID: 1.0.0.3
Step 2 Set the port attributes of NE1, NE2, and NE3 as follows.

l NE1
- 21-N1PETF8-1(PORT-1): Enable Port: Enabled; Port Mode: Layer 2.
- 3-N1PEG8-1(PORT-1): Port Mode: Layer 3; Enable Tunnel: Enabled; Specify IP: Manually; IP Address: 10.0.0.1; IP Mask: 255.255.255.252.
- 3-N1PEG8-2(PORT-2): Port Mode: Layer 3; Enable Tunnel: Enabled; Specify IP: Manually; IP Address: 10.0.1.1; IP Mask: 255.255.255.252.
l NE2
- 1-N1PEG8-1(PORT-1): Port Mode: Layer 2.
- 3-N1PEG8-1(PORT-1): Port Mode: Layer 3; Enable Tunnel: Enabled; Specify IP: Manually; IP Address: 10.0.0.2; IP Mask: 255.255.255.252.
l NE3
- 1-N1PEG8-1(PORT-1): Port Mode: Layer 2.
- 3-N1PEG8-1(PORT-1): Port Mode: Layer 3; Enable Tunnel: Enabled; Specify IP: Manually; IP Address: 10.0.1.2; IP Mask: 255.255.255.252.
Step 3 Create bidirectional tunnels between NE1 and NE2, and between NE1 and NE3 in end-to-end
mode. For details, see Configuring a Static and Bidirectional MPLS Tunnel in End-to-End Mode
in the Configuration Guide (Packet Transport Domain).
l Tunnel 1 (Tunnel Name: NE1-NE2(Tunnel1)): Protocol Type: MPLS; Signaling Type: Static CR; Service Direction: Bidirectional; Protection Type: Protection-Free; Scheduling Type: E-LSP; Bandwidth: No restriction; Advanced Attribute of Ingress Node: NE1, Out Interface: 3-N1PEG8-1(PORT-1); Advanced Attribute of Egress Node: NE2, Out Interface: 3-N1PEG8-1(PORT-1).
l Tunnel 2 (Tunnel Name: NE1-NE3(Tunnel2)): Protocol Type: MPLS; Signaling Type: Static CR; Service Direction: Bidirectional; Protection Type: Protection-Free; Scheduling Type: E-LSP; Bandwidth: No restriction; Advanced Attribute of Ingress Node: NE1, Out Interface: 3-N1PEG8-2(PORT-2), Outgoing Label: 99; Advanced Attribute of Egress Node: NE3, Out Interface: 3-N1PEG8-1(PORT-1), Incoming Label: 99.
Step 4 Configure UNI-NNI E-Line services for NE1, NE2, and NE3 in per-NE mode and create a PW
FPS protection group on NE1.
1.
Configure UNI-NNI E-Line services on NE1 and create a PW FPS protection group on
NE1. For details, see Step 1 to Step 8 in 13.8.5 Configuring PW FPS Protection
Groups.
Table 13-20 Parameter setting for new E-Line services (NE1)

l Service Name: E-Line-1
l Direction: UNI-NNI
l Source Port: 21-N1PETF8-1(PORT-1)
l Source VLANs: 100
l Bearer Type: PW
l Protection Type: PW FPS
The PW basic attributes on NE1 are set as follows (working PW / protection PW):

l PW Signaling Type: Static / Static
l PW Type: Ethernet / Ethernet
l PW Direction: Bidirectional / Bidirectional
l PW Encapsulation Type: MPLS / MPLS
l PW Incoming Label: 16 / 17
l PW Outgoing Label: 16 / 17
l Tunnel No.: 1(NE1-NE2(Tunnel1)) / 2(NE1-NE3(Tunnel2))
l Peer LSR ID: 1.0.0.2 / 1.0.0.3
The PW FPS protection group on NE1 is set as follows:

l Protection Type: PW FPS
l Enabling Status: Enabled
l Revertive Mode: Revertive
l Detection Method: OAM
l OAM Status (working PW / protection PW): Enabled / Enabled
l Detection Mode: Adaptive / Adaptive
l Detection Packet Type: FFD / FFD
l Detection Packet Period (ms): 3.3 / 3.3
l LSR ID to be Received: 1.0.0.2 / 1.0.0.3
l PW ID to be Received: 1 / 2
2.
Configure UNI-NNI E-Line services for NE2 and NE3. For details, see Configuring UNI-NNI E-Line Services Carried by PWs on a Per-NE Basis in the Configuration Guide (Packet Transport Domain).
Table 13-24 Parameter settings for new E-Line services (NE2 and NE3)

The settings are identical on NE2 and NE3:

l Service Name: E-Line-1
l Direction: UNI-NNI
l Source Port: 1-N1PEG8-1(PORT-1)
l Source VLANs: 100
l Bearer Type: PW
l Protection Type: Unprotected
Table 13-25 Parameter settings for PW basic attributes (NE2 / NE3)

l PW Signaling Type: Static / Static
l PW Type: Ethernet / Ethernet
l PW Direction: Bidirectional / Bidirectional
l PW Encapsulation Type: MPLS / MPLS
l PW Incoming Label: 16 / 17
l PW Outgoing Label: 16 / 17
l Tunnel No.: 1(NE1-NE2(Tunnel1)) / 2(NE1-NE3(Tunnel2))
l Peer LSR ID: 1.0.0.2 / 1.0.0.3
Step 5 Set OAM Status and Associate with AC Status to Enabled for the PW OAM of NE2 and
NE3. For details, see 7.9.1 Enabling PW OAM and 7.9.2 Setting the Parameters of PW
OAM.
The PW OAM parameters on NE2 and NE3 are set as follows:

l OAM Status: Enabled
l Associate with AC Status: Enabled
l Detection Mode: Adaptive
l Detection Packet Type: FFD
l Detection Packet Period(ms): 3.3
l LSR ID to be Received: 10.0.0.1
l PW ID to be Received: NE2: 1; NE3: 2
----End
Prerequisites
l The PW FPS protection group must be created.
Procedure
Step 1 Manually perform PW FPS switching, and query the switchover status. For details, see 13.8.4
Performing External Switching of PW APS.
NOTE
When the working PW and protection PW work properly, the status is Available. When the PWs fail to
work properly, the status is Unavailable. In this case, you need to handle the fault. Double-click PW ID.
In the PW Management dialog box that is displayed, check the PW running status, and rectify the fault
according to the PW status.
Step 2 Choose Configuration > Packet Configuration > Ethernet Service Management > E-Line
Service from the Function Tree.
Step 3 Right-click the service in the protection group, and choose Browse Current Alarms from the
shortcut menu. Check whether any service alarm is reported after protection switching.
l If any service alarm is reported, refer to the Alarms and Performance Events Reference to clear the alarm.
Step 4 Check the service connectivity by means of the ETH-OAM function. For details, see 6.3.7
Configuring Ethernet Service OAM.
l If the service is normal, it indicates that the PW FPS protection group is configured correctly.
----End
Follow-up Procedure
After the verification is complete, perform Manual Switching to Working to switch the service
back to the working PW.
Alarm Name              Meaning
PWAPS_TYPE_MISMATCH
PWAPS_PATH_MISMATCH
PWAPS_SWITCH_FAIL
PWAPS_LOST
Field                 Value                                                  Description
Protection Group ID   1 to 1024                                              Specifies the ID of a protection group.
Protection Type       No Protection, PW APS, Slave Protection Pair, PW FPS
Working PW ID         1 to 4294967295
Protection PW ID      1 to 4294967295
Field              Value                                          Description
Protection Mode    1+1, 1:1
Switchover Mode    Single-ended switching, Dual-ended switching
Restoration Mode   Revertive, Non-Revertive
Field                            Value                     Description
Switchover Restoration Time(m)   1 to 12                   Default value: 1
Hold-off Time(100ms)             0 to 100                  Default value: 0
Enabling Status                  Enabled, Disabled
Deployment Status                Deployed, Undeployed
Field              Value                    Description
Switching State    Enabled, Disabled
                   Available, Unavailable
                   Available, Unavailable
Field                           Value                        Description
ID                              1 to 4294967295
Name
MTU(byte)                       46 to 9000
Service Type
PW ID                           1 to 4294967295
PW Signaling Type               Static
PW Type                         Ethernet, Ethernet Tagged
PW Ingress Label/Source Port    16 to 1048575
PW Egress Label/Sink Port       16 to 1048575
Tunnel Selection Mode
Tunnel Type                     MPLS
Tunnel                          For example, 1
Opposite LSR ID                 For example, 10.70.71.123
14 MPLS Tunnel APS
You can configure a 1+1 or 1:1 MPLS tunnel APS protection group to protect MPLS tunnels.
This task requires you to configure the protection group on the source and sink NEs of the MPLS
tunnel.
14.10 MPLS Tunnel APS Configuration Case
This section describes an MPLS Tunnel APS configuration case. The configuration case covers
service planning and configuration process.
14.11 Verifying the MPLS Tunnel APS
This topic describes how to verify and ensure that the configuration data of the MPLS Tunnel
APS is correct after the MPLS Tunnel APS is configured.
14.12 Troubleshooting
This topic describes the symptoms of common faults that may occur when the MPLS Tunnel APS function is enabled and the solutions to these faults.
14.13 Relevant Alarms and Performance Events
This topic describes the alarms and performance events that are related to this feature.
14.14 Parameter Description: MPLS Tunnel APS
This topic describes the parameters required for configuring MPLS tunnel APS.
At the link layer, detection is performed by MPLS OAM; the detection packet period can be set to 10 ms or 3.3 ms. Set Detection Packet Period to 3.3 ms so that the protection switching time is less than 50 ms.
In MPLS Tunnel APS, detection at the link layer is performed by MPLS OAM. Therefore, you must set the MPLS OAM parameters for the relevant tunnels before configuring MPLS Tunnel APS.
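The 50 ms budget can be sanity-checked with a small calculation. The sketch below assumes, in the style of ITU-T Y.1711, that loss of connectivity is declared after three consecutive detection periods without an OAM packet; the threshold of three is an assumption for illustration, not a value taken from this guide.

```python
# Illustrative sketch: worst-case fault-detection time for MPLS OAM FFD.
# Assumption (Y.1711-style): loss of connectivity is declared after
# 3 consecutive detection periods with no OAM packet received.

def detection_time_ms(packet_period_ms: float, miss_threshold: int = 3) -> float:
    """Worst-case time to declare a connectivity fault."""
    return packet_period_ms * miss_threshold

for period in (3.3, 10.0):
    detect = detection_time_ms(period)
    print(f"FFD period {period} ms -> fault declared within ~{detect:.1f} ms "
          f"({50.0 - detect:.1f} ms left of the 50 ms switching budget)")
```

With a 3.3 ms period the fault is declared within roughly 10 ms, leaving most of the 50 ms budget for the switching action itself; a 10 ms period alone would consume 30 ms of it.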
APS
The automatic protection switching (APS) protocol coordinates the actions of the source and the sink in the case of bidirectional protection switching. Using the APS protocol, the source and sink cooperate to perform the protection switching, switching hold-off, and wait-to-restore (WTR) functions.
According to ITU-T Y.1720, the source and the sink both need to select channels during APS. In this case, the APS protocol is required for coordination. In the case of bidirectional protection switching, the APS protocol is used regardless of the revertive mode.
APS protocol packets are always transmitted over the protection tunnel. Therefore, the equipment at either end knows that the tunnel over which it receives APS protocol packets is the protection tunnel at the opposite end, and can thus determine whether the working and protection tunnel configurations are consistent at the two ends.
If the equipment cannot receive any APS packets, services are transmitted and received over the working tunnel.
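Because APS packets travel only over the protection tunnel, an endpoint can use the arrival tunnel to detect inconsistent configuration. The sketch below is a hypothetical model of that check; the tunnel IDs are made up, and the returned alarm name (listed in 14.13) is used only for illustration.

```python
# Hypothetical model: infer a working/protection mis-configuration from the
# tunnel on which APS packets arrive. APS packets are sent only over the
# protection tunnel, so receiving them on any other tunnel means the two
# ends disagree about which tunnel is the protection tunnel.

def check_aps_path(local_protection_tunnel: int, aps_rx_tunnel: int) -> str:
    if aps_rx_tunnel == local_protection_tunnel:
        return "OK"
    return "ETH_APS_PATH_MISMATCH"  # alarm name used for illustration

print(check_aps_path(local_protection_tunnel=4, aps_rx_tunnel=4))  # OK
print(check_aps_path(local_protection_tunnel=4, aps_rx_tunnel=2))  # ETH_APS_PATH_MISMATCH
```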
Switching Mode
MPLS Tunnel APS provides two switching modes: single-ended switching and dual-ended switching.
In single-ended switching, when one end detects a fault, it performs switching only on the local end and does not instruct the opposite end to perform any switching.
In dual-ended switching, when one end detects a fault, it performs switching on the local end and also instructs the opposite end to perform switching.
Single-ended switching does not require the APS protocol for negotiation, and it features rapid and stable switching.
Dual-ended switching ensures that services are transmitted over a consistent channel, which facilitates service management.
Revertive Mode
The MPLS Tunnel APS function supports two revertive modes: revertive mode and non-revertive mode.
In the non-revertive mode, services are not switched from the protection tunnel back to the working tunnel even after the working tunnel is restored to the normal state.
In the revertive mode, services are switched from the protection tunnel back to the original working tunnel after the working tunnel has remained in the normal state for the WTR time.
WTR Time
The WTR time refers to the period from the time when the original working tunnel is restored
to the time when the services are switched from the protection tunnel to the original working
tunnel.
In certain scenarios, the state of the working tunnel is unstable. In this case, setting the WTR
time can prevent frequent switching of services between the working tunnel and the protection
tunnel.
Hold-off Time
The hold-off time refers to the period from the time when the equipment detects a fault to the time when the switching operation is performed.
When the equipment is configured with both MPLS Tunnel APS protection and another protection scheme, setting the hold-off time ensures that the other protection switching operations are performed first.
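The interaction of the two timers can be sketched as follows. This is an illustrative model only; the value ranges are taken from the parameter tables later in this chapter.

```python
# Illustrative model of the hold-off and WTR timers.
import dataclasses

@dataclasses.dataclass
class ApsTimers:
    wtr_min: int = 5        # WTR time in minutes (range 1 to 12)
    holdoff_100ms: int = 0  # hold-off time in units of 100 ms (range 0 to 100)

    def switch_time_ms(self, fault_at_ms: float) -> float:
        # Switching is delayed by the hold-off time so that other
        # protection schemes can act first.
        return fault_at_ms + self.holdoff_100ms * 100.0

    def revert_time_min(self, recovered_at_min: float) -> float:
        # In revertive mode, traffic returns to the working tunnel only
        # after it has stayed healthy for the full WTR time.
        return recovered_at_min + self.wtr_min

timers = ApsTimers(wtr_min=5, holdoff_100ms=10)
print(timers.switch_time_ms(0.0))    # 1000.0 (switch 1 s after the fault)
print(timers.revert_time_min(0.0))   # 5.0 (revert 5 minutes after recovery)
```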
Protocol State
The protocol state indicates whether the APS protocol of the protection group is valid currently.
When you configure MPLS Tunnel APS protection, the protocol state is disabled by default. If you enable the APS protocol on the local NE before the protection group is configured on the opposite NE, the opposite NE may receive services abnormally. Therefore, start the protocol only after the MPLS Tunnel APS protection group is configured at both ends.
14.3 Specifications
This section describes the capacity of the OptiX OSN equipment for supporting MPLS tunnel
APS.
Table 14-1 provides specifications associated with MPLS tunnel APS.
Specifications           Support capability
Switching Type
Revertive Mode           l Revertive
                         l Non-revertive
WTR time                 l Revertive: 1 minute to 12 minutes, 5 minutes by default
APS
Switching Mode           l Clear switching
                         l Locked switching
                         l Forced switching
                         l Automatic switching
                         l Manual switching
                         l Exercise switching
Switching Time           50 ms
Switching Condition
(Any of the following
conditions triggers the
switching)
14.5 Availability
The MPLS tunnel APS function requires the support of the applicable equipment, boards, and
software.
Version Support

Product Name    Applicable Version
T2000
U2000
Hardware Support

Board Type    Applicable Equipment
N1PEG16
N1PEX1
N1PETF8
R1PEFS8
Q1PEGS2
R1PEGS1
N1PEG8
N1PEX2
N2PEX1
N1PEFF8
R1PEF4F
TNN1EX2
TNN1EG8
TNN1ETMC
TNN1EFF8
Applicable Object        Remarks
Tunnel APS
Tunnel APS and PW APS
Tunnel APS and PW OAM
Tunnel OAM
Tunnel APS and LPT
Maintenance Principles
None.
14.7 Principles
This topic describes the functions of MPLS Tunnel APS in terms of the basic principles,
protection attributes, protection rules, and switching impact.
Before Switching
l The ingress node and egress node send service packets over the working and protection tunnels.
l The ingress node and egress node receive service packets over the working tunnel.
During Switching
Figure 14-1 shows the principle of the single-ended switching, assuming a fault in the forward
working tunnel.
Figure 14-1 Principle of the 1+1 single-ended switching
1. When the egress node detects a switching condition, it receives service packets over the forward protection tunnel instead of the forward working tunnel.
2. The states of the reverse working and protection tunnels remain unchanged.
After Switching
If tunnel APS 1+1 single-ended switching is in revertive mode, the service in the protection
tunnel is switched back to the normal working tunnel after the WTR time elapses.
Before Switching
l The ingress and egress nodes exchange APS protocol packets over the protection tunnel, and are thus aware of each other's status. When the working tunnel is found faulty, the ingress and egress nodes can perform the protection switching, switching hold-off, and wait-to-restore (WTR) functions. Before switching, the request state of the APS protocol packets is No Request.
l The MPLS OAM mechanism is used to perform unidirectional continuity checks on all the tunnels.
During Switching
Figure 14-2 shows the principle of the dual-ended switching, assuming a fault in the forward
working tunnel.
Figure 14-2 Principle of the dual-ended switching
1. When the egress node detects a fault in the forward working tunnel, it switches to the forward protection tunnel and bridges the service to the reverse protection tunnel at the same time.
l The egress node receives the service from the forward protection tunnel instead of the forward working tunnel. In addition, the egress node sends an APS protocol packet carrying a bridging request to the ingress node.
l The egress node modifies the MPLS tunnel that the FEC travels through. That is, the tunnel that the FEC travels through is changed from the reverse working tunnel to the reverse protection tunnel. In this case, the packets in the FEC are encapsulated with the MPLS label corresponding to the reverse protection tunnel so that the service can be bridged to the reverse protection tunnel. Meanwhile, the egress node sends an APS protocol packet carrying a switching request to the ingress node.
NOTE
l "Bridging" means that the equipment transmits the service to the protection tunnel instead of the
working tunnel.
l "Switching" means that the equipment receives the service from the protection tunnel instead of the
working tunnel.
2. On receiving the APS protocol packet carrying a switching request, the ingress node performs the following operations:
l The ingress node modifies the MPLS tunnel that the FEC travels through. That is, the tunnel that the FEC travels through is changed from the forward working tunnel to the forward protection tunnel. In this case, the packets in the FEC are encapsulated with the MPLS label corresponding to the forward protection tunnel so that the service can be bridged to the forward protection tunnel.
l The ingress node receives the service from the reverse protection tunnel instead of the reverse working tunnel.
3.
After Switching
If MPLS tunnel APS 1:1 dual-ended switching is in revertive mode, the service is switched back
to the normal forward and reverse working tunnels after the WTR time elapses.
l Idle: represents the normal status of a node after the protocol is activated.
l Switch: represents the status of the faulty node after the switching.
l Wtr: represents the status of the original switching node when the fault is rectified within the WTR time.
l Lockout: represents the status when the lockout switching command is issued and services in the tunnel are switched to the working tunnel.
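The four states can be summarized as a small transition table. This is a simplified sketch; real APS state machines carry more events and request codes than shown here.

```python
# Simplified sketch of the four protocol states as a transition table.
TRANSITIONS = {
    ("Idle", "fault_detected"): "Switch",
    ("Switch", "fault_cleared"): "Wtr",   # revertive mode: wait before reverting
    ("Wtr", "wtr_expired"): "Idle",
    ("Idle", "lockout_cmd"): "Lockout",
    ("Switch", "lockout_cmd"): "Lockout",
    ("Lockout", "clear_cmd"): "Idle",
}

def next_state(state: str, event: str) -> str:
    # Events with no table entry leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "Idle"
for event in ("fault_detected", "fault_cleared", "wtr_expired"):
    state = next_state(state, event)
print(state)  # Idle
```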
Automatic Switching
Automatic switching is triggered when a certain alarm, for example, MPLS_TUNNEL_LOCV, MPLS_TUNNEL_MISMATCH, MPLS_TUNNEL_MISMERGE, MPLS_TUNNEL_EXCESS, or MPLS_TUNNEL_UNKNOWN, is detected.
External Switching
External switching is performed when the user issues switching commands that trigger the switching condition. The six external switching commands are as follows:
l Clear switching: clears all external commands and intermediate states, and returns the working and protection tunnels to the idle or automatic switching state accordingly.
l Forced switching: services are forcibly switched regardless of the status of a tunnel.
l Manual to Working: services are switched from the protection tunnel to the working tunnel. If the working tunnel is normal, the switching takes place. If the working tunnel is faulty or a higher-priority switching request exists, the switching does not take place.
l Manual to Protection: services are switched from the working tunnel to the protection tunnel. If the protection tunnel is normal, the switching takes place. If the protection tunnel is faulty or a higher-priority switching request exists, the switching does not take place.
l Exercise switching: simulates a switching operation without performing real switching. This function tests the MPLS Tunnel APS protection function and does not interrupt services.
l Lockout of protection: regardless of the switching state, services are locked to the working MPLS tunnel.
Switching Rules
Each switching observes the following rules:
l In descending order of priority, the switching commands are: clear switching, lockout of switching, forced switching, automatic switching, manual switching, and exercise switching. A command of higher priority can preempt a command of lower priority, but not the reverse.
l In the revertive mode, the commands of forced switching to the active tunnel and manual switching to the active tunnel are not supported.
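The priority order above can be expressed as a simple preemption check. The numeric ranks below are illustrative; only their relative order matters.

```python
# Illustrative preemption check for switching commands, highest priority first.
PRIORITY = {
    "clear": 0,
    "lockout": 1,
    "forced": 2,
    "automatic": 3,
    "manual": 4,
    "exercise": 5,
}

def preempts(new_cmd: str, active_cmd: str) -> bool:
    """A new command takes effect only if its priority is strictly higher."""
    return PRIORITY[new_cmd] < PRIORITY[active_cmd]

print(preempts("forced", "manual"))    # True: forced switching preempts manual
print(preempts("exercise", "forced"))  # False: exercise cannot preempt forced
```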
There are two MPLS tunnels, as shown in Figure 14-3. Normally, services are transmitted through the working tunnel, and the protection tunnel is used to transmit the APS protocol.
MPLS OAM performs the connectivity check for each unidirectional MPLS tunnel. The source
sends connectivity check packets periodically and the sink performs the check. Set the OAM
packet type to FFD and the sending period to 3.3 ms to ensure that the switching time is less
than 50 ms.
The switching modes of the MPLS Tunnel APS 1+1 protection include single-ended and dual-ended switching.
There are two MPLS tunnels, as shown in Figure 14-4. The continuous line indicates the working tunnel and the dashed line indicates the protection tunnel. Normally, services are transmitted through the working tunnel, and the protection tunnel is used to transmit the APS protocol.
MPLS OAM performs the connectivity check for each unidirectional MPLS tunnel. The source
sends connectivity check packets periodically and the sink performs the check. Set the OAM
packet type to FFD and the sending period to 3.3 ms to ensure that the switching time is less
than 50 ms.
The switching mode of the MPLS Tunnel APS 1:1 protection is dual-ended switching.
l Enable MPLS OAM of the tunnels before configuring the APS protection.
l Set the MPLS OAM packet type to FFD and the sending period to 3.3 ms.
l Set the APS protocol to the enabled state after the nodes at both ends are configured with the APS protection group.
Prerequisites
l MPLS OAM of each MPLS tunnel in the protection group must be enabled.
Precautions
WARNING
l An NNI that is configured with the 1+1 MPLS tunnel protection cannot be configured into
a link aggregation group (LAG).
l The protection MPLS tunnel cannot carry any extra services.
Procedure
Step 1 In the NE Explorer, select the source NE of the MPLS tunnel and choose Configuration >
Packet Configuration > APS Protection Management from the Function Tree.
Step 2 Click New.
Step 3 In the New Tunnel Protection Group dialog box that is displayed, set the parameters for the
MPLS tunnel protection group and select the working and protection MPLS tunnels.
CAUTION
When creating a protection group, set Protocol Status to Disabled. Enable the protocol only
after the configuration of the protection group is completed on the source and sink NEs.
Step 4 Click OK. The MPLS tunnel APS protection group is configured successfully.
NOTE
The bandwidth of the protection MPLS tunnel must be higher than the bandwidth of the working MPLS
tunnel. To increase the bandwidth of the working MPLS tunnel after the protection group is created, increase
the bandwidth of the protection MPLS tunnel first.
Step 5 Repeat Steps 1-4 to configure the protection group on the sink NE.
Step 6 Enable the protocol of the MPLS tunnel APS protection group.
1.
In the NE Explorer, select the source NE of the MPLS tunnel and choose Configuration
> Packet Configuration > APS Protection Management from the Function Tree.
2.
Right-click a created protection group and choose Start Protocol from the shortcut menu.
3.
A dialog box is displayed, indicating that the operation is successful. Protocol Status of
the protection group changes to Enabled.
----End
Follow-up Procedure
1.
2.
Service Planning
In this case, the NEs are OptiX OSN 3500 equipment. NE1 and NE4 are connected to Node B
and RNC respectively through E1 interfaces. NEs are connected to each other through FE
interfaces.
NE     Working Tunnel                                  Protection Tunnel
NE1    Ingress Tunnel ID: 1; Egress Tunnel ID: 2       Ingress Tunnel ID: 3; Egress Tunnel ID: 4
NE2    Transit Tunnel ID: 1 (positive), 2 (reverse)
NE3                                                    Transit Tunnel ID: 3 (positive), 4 (reverse)
NE4    Ingress Tunnel ID: 1; Egress Tunnel ID: 2       Ingress Tunnel ID: 3; Egress Tunnel ID: 4
Two MPLS tunnels are created on NE1. The working tunnel is MPLS tunnel 1 whose tunnel ID
is 1 in the ingress direction and 2 in the egress direction. The protection tunnel is MPLS tunnel
2 whose tunnel ID is 3 in the ingress direction and 4 in the egress direction.
An MPLS tunnel is created on NE2 as a transit tunnel. The positive tunnel ID is 1 and the reverse tunnel ID is 2.
An MPLS tunnel is created on NE3 as a transit tunnel. The positive tunnel ID is 3 and the reverse tunnel ID is 4.
Two MPLS tunnels are created on NE 4. The working tunnel is MPLS tunnel 1 whose tunnel
ID is 1 in the ingress direction and 2 in the egress direction. The protection tunnel is MPLS
tunnel 2 whose tunnel ID is 3 in the ingress direction and 4 in the egress direction.
According to the MPLS Tunnel APS configuration principle, you must set the OAM parameters
for the two MPLS tunnels and enable the OAM function in the ingress direction of the two
tunnels before configuring the MPLS Tunnel APS protection group. Set the OAM packet type
to FFD and packet sending period to 3.3 ms.
Prerequisites
l You must have learned about the networking, requirements, and service planning.
l Two tunnels for the MPLS Tunnel APS protection must be created.
Procedure
Step 1 Enable the MPLS OAM function for the two MPLS tunnels and set the OAM packet parameters.
1.
Click the NE in the NE Explorer. Choose Configuration > Packet Configuration > MPLS
Management > Unicast Tunnel Management from the Function Tree.
2.
Click the OAM Parameters tab. Set OAM Status to Enabled for the tunnels whose tunnel
IDs are 1, 2, 3, and 4.
3.
Set Detection Packet Type to FFD and Detection Packet Period(ms) to 3.3 for the tunnels
whose tunnel IDs are 1 and 3.
4.
Click Apply. The Operation Result dialog box is displayed indicating that the operation
is successful.
5.
Click Close.
6. On NE2, set Detection Packet Type to FFD and Detection Packet Period(ms) to 3.3 for the tunnels whose tunnel IDs are 2 and 4, by referring to the preceding steps.
1. Select the NE in the NE Explorer. Choose Configuration > Packet Configuration > APS Protection Management.
2.
Click New to display the New Tunnel Protection Group dialog box. Set the parameter as
follows:
l Set Protection Type to 1:1.
l Set Switching Mode to Dual-Ended.
NOTE
If Protection Type is set to 1:1, Switching Mode defaults to Dual-Ended and cannot be changed.
1. Click the NE in the NE Explorer. Choose Configuration > Packet Configuration > APS Protection Management.
2.
Click the Protection Group tab. Right-click the APS protection group that is already
created and choose Start Protocol from the shortcut menu.
3.
A dialog box is displayed indicating that the operation is successful. Then, Protocol
Status of the APS protection group changes to Enabled.
----End
Prerequisites
l
Background Information
You can verify the MPLS Tunnel APS function in the following aspects:
l If a fault occurs on the network, MPLS Tunnel APS switching is performed normally.
l If the protection group is set to the revertive mode, the service is switched from the protection tunnel back to the working tunnel after the WTR time expires.
l All the commands that trigger manual switching can be issued correctly.
Procedure
Step 1 Verify that, if a fault occurs on the network, MPLS Tunnel APS switching is performed normally.
1. Disconnect a fiber of the working tunnel to generate a fault.
2.
In the Main Topology, right-click the NE that you want to verify and choose NE
Explorer from the shortcut menu.
3.
Choose Configuration > Packet Configuration > APS Protection Management in the
Function Tree. In the pane on the right side, select the protection group that you want to
verify. Then, click Function > Query Switching Status to check whether the service is
switched from the working tunnel to the protection tunnel.
Step 2 Verify that, if the protection group is set to the revertive mode and the working tunnel recovers, the service is switched from the protection tunnel back to the working tunnel after the WTR time expires.
1.
Reconnect the fiber that is disconnected in Step 1. After the WTR time expires, click
Function > Query Switching Status to check whether Active Tunnel is the specified
Working Tunnel.
Step 3 Verify that all the commands that trigger manual switching can be issued correctly.
1.
Select the protection group that you want to verify and click Clear under Function. Then,
click Query Switching Status under Function to check whether the command is issued
successfully.
2.
Repeat the preceding steps to check whether all commands that trigger manual switching,
such as Force Switching, Manual Switching to Working, Manual Switching to
Protection, Exercise Switching, and Lockout of Protection are issued successfully.
----End
14.12 Troubleshooting
This topic describes the symptoms of common faults that may occur when the MPLS Tunnel APS function is enabled and the solutions to these faults.
Symptom
When the network works normally, a condition that triggers the protection switching occurs on
the working tunnel. The services, however, fail to be automatically switched from the working
tunnel to the protection tunnel. Consequently, the services are interrupted. The symptoms are as
follows:
l The MPLS Tunnel APS protection group is configured incorrectly. Thus, the protection tunnel cannot receive the APS frames. In this case, the protection fails and the following alarms may be reported:
ETH_APS_PATH_MISMATCH
ETH_APS_TYPE_MISMATCH
ETH_APS_LOST
ETH_APS_SWITCH_FAIL
l The protection tunnel is faulty and fails to provide protection. In this case, the following alarms may be reported:
MPLS_TUNNEL_LOCV
MPLS_TUNNEL_MISMERGE
MPLS_TUNNEL_MISMATCH
MPLS_TUNNEL_EXCESS
MPLS_TUNNEL_SD
MPLS_TUNNEL_SF
MPLS_TUNNEL_UNKNOWN
MPLS_TUNNEL_BDI
MPLS_TUNNEL_FDI
Possible Causes
l Cause 1: The fibers or cables on the protection tunnel are connected incorrectly.
l Cause 2: The OAM parameters of the working tunnel and protection tunnel are set incorrectly.
l Cause 3: The parameter values of the APS protection group at the two ends are different from each other.
Flow Chart
Figure 14-6 shows the flow chart for rectifying a fault when the MPLS Tunnel APS function is
enabled.
Figure 14-6 Flow chart for rectifying a fault when the MPLS Tunnel APS function is enabled
Procedure
l Cause 1: The fibers or cables on the protection tunnel are connected incorrectly.
1. Reconnect the fibers or cables to each port and ensure that the fibers or cables are connected correctly.
l Cause 2: The OAM parameters of the working tunnel and protection tunnel are set incorrectly.
1. In the Main Topology, right-click the NE that you want to configure and choose NE Explorer from the shortcut menu.
2. Choose Configuration > Packet Configuration > MPLS Management > Unicast Tunnel Management in the Function Tree.
3. Click the OAM Parameters tab, and select the tunnel that implements the MPLS Tunnel APS function. Then, set Detection Packet Type to FFD, and set Detection Packet Period (ms) to 3.3 or 10.
l Cause 3: The parameter values of the APS protection group at the two ends are different from each other.
1. In the Main Topology, right-click the NE that you want to configure and choose NE Explorer from the shortcut menu.
2.
3. Click Query to query whether the settings of the APS protection group at the two ends are the same. If the settings of the APS protection group are different from each other, set the protection tunnel in the APS protection group to the same value as the working tunnel.
4. If the preceding operations are performed but the alarm related to the MPLS Tunnel APS function persists, see the Alarms and Performance Events Reference to clear the alarm. If the fault persists, contact Huawei technical support engineers for the solution.
----End
Alarm Name               Description
ETH_APS_LOST
ETH_APS_PATH_MISMATCH
ETH_APS_SWITCH_FAIL
ETH_APS_TYPE_MISMATCH
MPLS_TUNNEL_LOCV
MPLS_TUNNEL_MISMERGE
MPLS_TUNNEL_MISMATCH
MPLS_TUNNEL_EXCESS
MPLS_TUNNEL_SD
MPLS_TUNNEL_SF
MPLS_TUNNEL_UNKNOWN
MPLS_TUNNEL_BDI
MPLS_TUNNEL_FDI
Field                  Value                        Description
Protection Group ID    For example, 1
Protection Type        1+1, 1:1
Switching Mode         Single-Ended, Dual-Ended
Revertive Mode         Non-Revertive, Revertive
WTR Time(m)            1 to 12                      Default: 5
Hold-off Time(100ms)   0 to 100                     Default: 0
Protocol Status        Enabled, Disabled
Switching Status
14.14.2 Parameters for Configuring MPLS Tunnel APS (in End-to-End Mode)
This topic describes the parameters for configuring MPLS tunnel APS protection groups in end-to-end mode.
Field                  Value Range                  Description
Protection Group ID
Protection Type        1+1, 1:1                     Default: 1+1
Switching Mode         Single-Ended, Dual-Ended     Default: Single-Ended
                                                    NOTE: When Protection Type is set to 1:1, Switching Mode defaults to Dual-Ended and cannot be changed.
MPLS Tunnel            For example, 1
MPLS Tunnel            For example, 1
Protocol Status        Enabled, Disabled
Switching Status
Revertive Mode         Non-Revertive, Revertive
WTR Time(min)          1 to 12                      Default: 5
Hold-off Time(100ms)   0 to 100                     Default: 0
15 MS-PW
This topic describes the alarms and performance events that are related to the MS-PW.
15.11 MS-PW Parameters
This topic describes the parameters associated with a PW.
15.1 Introduction
An MS-PW provides an edge-to-edge virtual connection by setting up static PW segments. The S-PE at the tangent point of the access ring and the convergence ring swaps PW labels and aggregates PWs, which reduces the number of tunnels on the convergence ring.
Definition
An MS-PW is set up between two PW terminating provider edges (T-PEs) and travels through the PW switching provider edge (S-PE). At the S-PE, PW labels are swapped, and the MS-PW is thus divided into two or more segments.
An MS-PW consists of multiple adjacent PW segments, and each PW segment is a point-to-point PW.
Purpose
If the equipment does not support MS-PW, Ethernet services can be transmitted over a PSN by
static MPLS tunnels.
l At the ingress node, PW and tunnel labels are put on Ethernet packets.
In this service model, only the tunnel labels can be swapped at the transit node. Therefore, as
shown in Figure 15-1, users must configure edge-to-edge tunnels from the NodeB to the RNC.
The number of tunnels on the convergence ring PSN2 increases sharply as the number of NodeBs
increases.
Figure 15-1 Single-segment PW networking diagram
As shown in Figure 15-2, the S-PE at the tangent point of the access ring and the convergence
ring terminates the tunnels on the access rings. All the PWs on the access rings are aggregated
into one tunnel. Therefore, the number of tunnels on the convergence ring is reduced.
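The reduction can be illustrated with a quick count; the NodeB count below is an arbitrary example.

```python
# Quick count: tunnels needed on the convergence ring, with and without MS-PW.
def convergence_ring_tunnels(num_nodebs: int, use_ms_pw: bool) -> int:
    if use_ms_pw:
        return 1           # all access-ring PWs aggregated into one tunnel
    return num_nodebs      # one edge-to-edge tunnel per NodeB

print(convergence_ring_tunnels(50, use_ms_pw=False))  # 50
print(convergence_ring_tunnels(50, use_ms_pw=True))   # 1
```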
Figure 15-2 MS-PW networking diagram
PW Segment
A PW segment is a part of the MS-PW, and is set up between two PEs. The two PEs can be two
T-PEs, two S-PEs, or one T-PE and one S-PE.
15.4 Availability
The MS-PW function requires the support of the applicable equipment and boards.
Version Support

Product Name    Applicable Version
U2000
Hardware Support

Board Type    Applicable Equipment
N1PETF8
N1PEG8
N1PEX2
N2PEX1
N1PEFF8
N1PEX1
N1PEG16
R1PEFS8
Q1PEGS2
R1PEGS1
R1PEF4F
TNN1EX2
TNN1EG8
TNN1ETMC
TNN1EFF8
Applicable Object       Remarks
MS-PW
MS-PW and LPT
MS-PW and PW APS        l For the intermediate node that is configured with the MS-PW, the OAM function cannot be enabled and the APS protection cannot be configured.
                        l For the intermediate node that is enabled with the OAM function and configured with the APS protection, the MS-PW cannot be set up.
Maintenance Principles
None.
15.6 Principles
This section describes the principles of MS-PW.
An MS-PW provides an edge-to-edge virtual connection by setting up static PW segments.
See Figure 15-3.
l PW1 carries UNI-NNI E-Line services from CE1 and PW2 carries UNI-NNI E-Line services from CE2.
l MS-PW1 consists of PW1 and PW3, and MS-PW2 consists of PW2 and PW4.
l PW1 is carried by tunnel 1, PW2 is carried by tunnel 2, and PW3 and PW4 are carried by tunnel 3.
Figure 15-3 (Diagram: CE1 and CE2 connect to T-PE1 and T-PE2; PW1 in tunnel 1 and PW2 in tunnel 2 terminate at the S-PE, which switches them onto PW3 and PW4 in tunnel 3 toward T-PE3 and CE3, forming MS-PW1 and MS-PW2.)
2. The S-PE swaps the labels of PW1 and PW3 and the labels of PW2 and PW4. In addition, the S-PE creates connection relationships between PW1 and PW3 and between PW2 and PW4.
3. E-Line services between CE1/CE2 and CE3 are transmitted in an edge-to-edge manner.
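The label swapping performed at the S-PE can be sketched as follows (an illustrative model, not device code; the label values and packet representation are made up for the example):

```python
# Illustrative sketch of the S-PE behavior: forward MS-PW traffic by
# swapping the incoming PW label for the label of the connected segment.
# The label values are hypothetical.
SWAP_TABLE = {
    20: 40,  # PW1 -> PW3
    40: 20,  # PW3 -> PW1
    30: 50,  # PW2 -> PW4
    50: 30,  # PW4 -> PW2
}

def s_pe_forward(packet):
    """Swap the PW label; the payload passes through unchanged."""
    in_label, payload = packet
    return (SWAP_TABLE[in_label], payload)

print(s_pe_forward((20, b"frame")))  # (40, b'frame')
```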
Prerequisites
l PWs between the T-PE nodes and the S-PE node must be configured.
Procedure
Step 1 In the NE Explorer, select the required NE, and choose Configuration > Packet
Configuration > MPLS Management > PW Management from the Function Tree. Then,
select MS PW in the right pane.
Step 2 Click New. In the dialog box that is displayed, set the general attributes of the MS-PW.
NOTE
When Tunnel selection mode is set to Auto Select, you must set Peer LSR ID. In addition, click the Forward Tunnel Selection or Backward Tunnel Selection tab, and set the priority for automatically selecting the tunnel type.
Step 3 Click the QoS tab, and configure the QoS parameters of the MS-PW.
NOTE
l It is recommended that you set the QoS parameters to the same values for the forward and backward PWs.
l If the forward and backward PWs traverse different domains, the bandwidth limit parameter needs to take
the same value for the forward and backward PWs, but the other parameters of the QoS can take different
values for the forward and backward PWs.
Step 4 Click the Advanced Attributes tab, and configure the advanced attributes of the MS-PW.
Step 5 Click Apply. In the dialog box that is displayed, click Close.
----End
Figure 15-4 (Diagram: NodeBs connect to NE1 and NE2 (LSR ID: 1.0.0.2); tunnel 2 and tunnel 3 carry MS-PW 1 and MS-PW 2 to NE4 (LSR ID: 1.0.0.4), which connects to the RNC.)
Service Planning
Table 15-1 lists the planning for the general attributes of the MS-PW.
Table 15-1 Planning for the general attributes of the MS-PW

Parameter              MS-PW1                            MS-PW2
                       Forward PW     Backward PW        Forward PW     Backward PW
ID                     -              -                  -              -
Name                   MS-PW1                            MS-PW2
MTU (bytes)            1500                              1500
Service Type           Ethernet Service                  Ethernet Service
PW ID                  40             60                 50             70
PW Signaling Type      Static         Static             Static         Static
PW Type                Ethernet       Ethernet           Ethernet       Ethernet
PW Direction           Bidirectional  Bidirectional      Bidirectional  Bidirectional
PW Incoming Label      20             40                 30             50
PW Outgoing Label      20             40                 30             50
Tunnel Selection Mode  Manually       Manually           Manually       Manually
Tunnel Type            MPLS           MPLS               MPLS           MPLS
Tunnel No.             -              -                  -              -
Peer LSR ID            1.0.0.1        1.0.0.4            1.0.0.2        1.0.0.4
Table 15-2 lists the QoS parameter planning for the MS-PW.
Table 15-2 QoS parameter planning for the MS-PW
Parameter        Forward PW Ingress   Forward PW Egress   Backward PW Ingress   Backward PW Egress
Bandwidth Limit  Enabled              Disabled            Enabled               Disabled
Policy           -                    -                   -                     -
CIR(kbit/s)      10240                -                   10240                 -
CBS(kbit/s)      -                    -                   -                     -
PIR(kbit/s)      30720                -                   30720                 -
PBS(kbit/s)      -                    -                   -                     -
EXP              -                    -                   -                     -
LSP Mode         Uniform              -                   Uniform               -
Table 15-3 lists the planning for the advanced attributes of the MS-PW.
Table 15-3 Planning for the advanced attributes of the MS-PW

Parameter     Value
Control Word  Use preferred CW
-             Ping
Request VLAN  -
Prerequisites
l E-Line services carried by PWs must be configured on the T-PE nodes (NE1, NE2, and NE4) according to the service planning.
Procedure
Step 1 In the NE Explorer, select NE3, and choose Configuration > Packet Configuration > MPLS
Management > PW Management from the Function Tree. Then, select MS PW in the right
pane.
Step 2 Click New. In the dialog box that is displayed, set the general attributes of MS-PW1.
The parameters of the general attributes of MS-PW1 are as follows.
Parameter          Value
ID                 -
Name               MS-PW1
MTU (bytes)        1500
Service Type       Ethernet
PW ID              Forward PW: 40; Backward PW: 60
PW Signaling Type  Static
PW Incoming Label  Forward PW: 20; Backward PW: 40
PW Outgoing Label  Forward PW: 20; Backward PW: 40
Tunnel Type        MPLS
Tunnel No.         Forward PW: 1; Backward PW: 3
Peer LSR ID        -
Step 3 Click the QoS tab, and configure the QoS parameters of MS-PW1.
The QoS parameters of MS-PW1 are as follows.
Parameter        Value
Bandwidth Limit  Ingress: Enabled; Egress: Disabled
CIR(kbit/s)      Forward/Backward PW Ingress: 10240
PIR(kbit/s)      Forward/Backward PW Ingress: 30720
EXP              Forward/Backward PW Ingress: 1
LSP Mode         Uniform
Step 4 Click the Advanced Attributes tab, and configure the advanced attributes of MS-PW1.
The parameters of advanced attributes of MS-PW1 are as follows.
Parameter     Value
Control Word  Use preferred CW
-             Ping

Huawei Proprietary and Confidential
Copyright Huawei Technologies Co., Ltd.
The parameters of the general attributes of MS-PW2 are as follows.

Parameter          Value
ID                 -
Name               MS-PW2
MTU (bytes)        1500
Service Type       Ethernet
PW ID              Forward PW: 50; Backward PW: 70
PW Signaling Type  Static
PW Incoming Label  Forward PW: 30; Backward PW: 50
PW Outgoing Label  Forward PW: 30; Backward PW: 50
Tunnel Type        MPLS
Tunnel No.         Forward PW: 2; Backward PW: 3
Peer LSR ID        -
----End
Prerequisites
l
Background Information
As shown in Figure 15-5, test the connectivity between NE1 and NE4 and between NE2 and
NE4 by performing a PW ping test at NE1 and NE2.
Figure 15-5 E-Line services carried by the MS-PW
(Diagram: NodeBs connect to NE1 and NE2; PW 40 in tunnel 1 and PW 50 in tunnel 2 are switched at NE3 onto PW 60 and PW 70 in tunnel 3 toward NE4, forming MS-PW 1 and MS-PW 2; NE4 connects to the RNC.)
Table 15-4 Planning for the general attributes of the PW ping test

Parameter    MS-PW1     MS-PW2
Peer PW ID   60         70
Peer IP      10.2.3.4   10.2.3.4
NOTE
Procedure
Step 1 In the NE Explorer, select the required NE, and then choose Configuration > Packet
Configuration > MPLS Management > PW Management from the Function Tree.
Step 2 Click PW OAM Parameter and select PW40. Then, choose OAM Operation > Ping Test.
Step 3 In the Ping Test dialog box that is displayed, set the parameters of Ping Test.
Parameter                Value
Packet Count             -
EXP Value                -
TTL                      255
Transmit Interval(10ms)  100
Packet Length            64
-                        300
Response Mode            -
Peer PW ID               60
Peer IP                  10.2.3.4
Step 4 Click Start Test. Check the result of the ping test.
Step 5 View the number of transmitted and received packets, packet loss ratio, and delay in the test
result list, and determine the PW status.
NOTE
If the number of received packets is less than the number of transmitted packets, packet loss has occurred.
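The check in Step 5 can be sketched as follows (a minimal illustrative helper, not U2000 code):

```python
# Minimal sketch of the Step-5 check: derive the packet loss ratio and a
# verdict from the ping counters.
def evaluate_ping(transmitted, received):
    if transmitted <= 0 or received > transmitted:
        raise ValueError("invalid counters")
    loss_ratio = (transmitted - received) / transmitted
    status = "OK" if loss_ratio == 0 else "PACKET LOSS"
    return loss_ratio, status

print(evaluate_ping(5, 5))  # (0.0, 'OK')
print(evaluate_ping(5, 3))  # (0.4, 'PACKET LOSS')
```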
Step 6 Select PW50 on NE2, and repeat Steps 1 to 5 to run a new ping test. Set the parameters as follows.
TIP
Among the following parameters, only the value of Peer PW ID is different from the value that is set for
the preceding ping test.
Parameter                Value
Packet Count             -
EXP Value                -
TTL                      255
Transmit Interval(10ms)  100
Packet Length            64
-                        300
Response Mode            -
Peer PW ID               70
Peer IP                  10.2.3.4
----End
Field              Value Range           Description
ID                 For example, 1        -
Service Type       -                     -
Name               -                     -
MTU(bytes)         46 to 9000            -
Deployment Status  Deployed, Undeployed  -

Field         Value Range        Description
PW ID         For example, 1     -
Enable State  Enabled, Disabled  -
Field                         Value Range                                     Description
PW Signaling Type             Static                                          -
PW Type                       Ethernet, Ethernet Tagged Mode, CESoPSN, SAToP  -
Direction                     Bidirectional                                   -
PW Encapsulation Type         MPLS                                            -
PW Ingress Label/Source Port  For example, 17                                 NOTE: The value ranges from 16 to 32768, in steps of 2048. For the OptiX OSN 1500, the maximum value is 2048. For the OptiX OSN 3500/7500, the maximum value is 32768.
-                             For example, 18                                 NOTE: The value ranges from 16 to 32768, in steps of 2048.
Opposite LSR ID               For example, 10.70.71.123                       -
-                             Example: Up                                     -
-                             Example: Down                                   -
Field                              Value Range           Description
Compositive Working Status         Up, Down              -
Tunnel Type                        MPLS                  -
Tunnel                             For example, 43       -
Deployment Status                  Deployed, Undeployed  -
Tunnel Automatic Selection Policy  -                     -
Field            Value Range        Description
Bandwidth Limit  Enabled, Disabled  -
Policy           -                  -
-                For example, 5     -
Field        Value Range                   Description
CIR(Kbit/s)  1024-10000000, Unlimited      Specifies the committed information rate of the queue. Packets whose rate is not more than the CIR are all forwarded. If the rate of the packets exceeds the CIR, some packets are discarded according to a certain packet discarding policy. Default: 4294967295 (FFFFFFFFFF is invalid)
CBS(byte)    64-10000000                   Default: 4294967295 (FFFFFFFFFF is invalid)
PIR(Kbit/s)  -                             -
EXP          0, 1, 2, 3, 4, 5, 6, 7, None  Default: None
LSP Mode     Pipe, Uniform                 Default: Uniform
Field         Value Range    Description
PW ID         Example: 5     Default: -
Control Word  None, CW       Default: CW
-             None, Ping     Default: Ping
Request VLAN  -              NOTE: For the OptiX OSN equipment, this parameter is not applicable to a multi-hop PW.
TP ID         1 to 65535     -
16 Packet-Based Linear MSP
Packet-based linear MSP protects channelized STM-1 services between Hybrid MSTP devices
and RNC devices.
16.9 Creating Packet-Based Linear MSP
To protect the services transmitted on a point-to-point chain network, create packet-based linear
MSP on the packet plane.
16.10 Verifying Packet-Based Linear MSP
After a network is configured with Packet-Based linear MSP, services in the working path of
the Packet-Based linear MSP group can be protected.
16.11 Relevant Alarms and Performance Events
This topic describes the alarms and performance events that are related to Packet-Based Linear
MSP.
(Figure: NE A and NE B are connected by a working path and a protection path; when the working path fails, protection switching moves the traffic to the protection path.)
(Figure: the common service between NE A and NE B normally travels on the working path; after protection switching, it travels on the protection path.)
Purpose
The packet-based linear MSP scheme uses the MSOH bytes K1 and K2 to implement automatic protection switching when the working path fails, thus protecting services.
Meanings of K Bytes
Table 16-1 provides the meanings of K bytes in the MSOH.
Table 16-1 Meanings of K bytes (linear MSP)
K Byte            Meaning
K1 (bits 1 to 4)  The four bits carry the bridging request code. Table 16-2 provides the meanings of the four bits.
K1 (bits 5 to 8)  The four bits indicate the number of the service signal to which the bridging request corresponds. 0 represents the null signal, 1-14 represent normal service signals, and 15 represents the extra service signal (applicable only to the 1:N mode).
K2 (bits 1 to 4)  The four bits carry the number of the service signal that the local end bridges onto the path. The value range of the four bits is the same as that of K1 bits 5 to 8.
K2 (bit 5)        This bit indicates the protection mode. 1 represents the 1:N mode; 0 represents the 1+1 mode.
K2 (bits 6 to 8)  The three bits indicate the status. 000 represents the idle state, 111 represents the MS-AIS state, and 110 represents the MS-RDI state.
Table 16-2 Meanings of K1 bits 1 to 4
K1 (bits 1 to 4)  Meaning
1111              Lockout of protection
1110              Forced switching
1101              Signal fail (high priority)
1100              Signal fail (low priority)
1011              Signal degrade (high priority)
1010              Signal degrade (low priority)
1001              Unused
1000              Manual switching
0111              Unused
0110              WTR
0101              Unused
0100              Exercise switching
0011              Unused
0010              Reverse request
0001              Non-revertive
0000              No request
NOTE
A reverse request assumes the priority of the bridging request to which it responds.
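The K1/K2 field layout described above can be sketched as follows (a reading aid, not equipment code; bit 1 is taken as the most significant bit, which is the usual SDH convention):

```python
# Sketch of the K1/K2 field layout from Tables 16-1 and 16-2.
def parse_k_bytes(k1, k2):
    return {
        "request_code": (k1 >> 4) & 0xF,       # K1 bits 1-4: bridging request
        "requested_signal": k1 & 0xF,          # K1 bits 5-8: signal number
        "bridged_signal": (k2 >> 4) & 0xF,     # K2 bits 1-4: bridged signal
        "mode_1_to_n": bool((k2 >> 3) & 0x1),  # K2 bit 5: 1 = 1:N, 0 = 1+1
        "status": k2 & 0x7,                    # K2 bits 6-8: 000 = idle
    }

# Forced switching (1110) of signal 1; bridged signal 1; 1+1 mode; idle status.
fields = parse_k_bytes(0b11100001, 0b00010000)
print(fields["request_code"])  # 14, i.e. 0b1110 (forced switching)
```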
Protection Type
With regard to the protection scheme, packet-based linear MSP is available in the following
types: 1+1 packet-based linear MSP and 1:1 packet-based linear MSP.
With regard to the switching mode, packet-based linear MSP is available in the following types: single-ended switching and dual-ended switching.
l Single-ended switching
When one end detects a fault, protection switching occurs at the faulty end only, and the other end remains unchanged.
l Dual-ended switching
When one end detects a fault, protection switching occurs at both ends at the same time, regardless of whether the other end is faulty.
With regard to the revertive mode, packet-based linear MSP is available in the following types: revertive mode and non-revertive mode.
l Revertive mode
When an NE is in the switching state, the NE releases the switching and returns to the normal state if the working path is restored to normal and stays normal for a certain period. The period between the restoration of the working path and the release of the switching is called the wait-to-restore (WTR) time. To prevent frequent switching events due to an unstable working path, it is recommended that you set the WTR time. In revertive mode, after the working path stays normal for the WTR time, the service is switched from the protection path back to the working path.
l Non-revertive mode
When an NE is in the switching state, the NE remains in that state even if the working path is restored to normal, and the service is not switched from the protection path back to the working path.
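The revertive-mode decision can be sketched as follows (the 600 s default WTR and the 6 s to 720 s range come from the specifications in this chapter; the helper itself is illustrative, not equipment code):

```python
# Sketch of the revertive-mode decision after the working path recovers.
def should_revert(revertive, seconds_since_recovery, wtr_seconds=600):
    if not revertive:
        return False  # non-revertive: traffic stays on the protection path
    # Revertive: switch back only after the working path has stayed
    # normal for the full WTR time.
    return seconds_since_recovery >= wtr_seconds

print(should_revert(True, 601))    # True: WTR expired, revert
print(should_revert(True, 30))     # False: still within WTR
print(should_revert(False, 9999))  # False: non-revertive never reverts
```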
Switching State                 Channel Bearing Services
Force to Standby                Protection channel
Force to Active                 Working channel
Switch upon Signal Failure      Protection channel
Signal Fail Protection          Working channel
Switch upon Signal Degradation  Protection channel
Signal Degrade Protection       Working channel
Manual to Standby               Protection channel
Manual to Active                Working channel
WaitToRestore                   Protection channel
Exercise                        Working channel
Non-Revertive Switching         Protection channel
Idle                            Working channel
Unknown                         -
Lockout                         Working channel
16.3 Specifications
This section describes the packet-based linear MSP specifications of the OptiX OSN equipment.
Table 16-3 provides specifications associated with packet-based linear MSP.
Table 16-3 Specifications associated with packet-based linear MSP
Item                Specifications
Support capability  20
Protection Type     -
Revertive Type      l Revertive
                    l Non-revertive
WTR Time            Revertive: 6 s to 720 s, 600 s by default
Switching Time      50 ms
Hold-Off Time       0 s to 10 s (0 s by default)
Switching Condition (any of the following conditions triggers the switching):
l R_LOS
l R_LOF
l MS_AIS
l B2_EXC
l B2_SD (optional)
l Forced switching
l Manual switching
l Exercise switching
l Clear switching
l Lockout switching
16.5 Availability
The packet-based linear MSP function requires the support of the applicable equipment and
boards.
Version Support

Product Name                        Applicable Version
OptiX OSN 7500II/7500/3500/1500     V200R011C02
U2000                               V100R005
Hardware Support

Board Type  Remarks
N1CQ1       Packet-based linear MSP
TNN1CO1     Packet-based linear MSP
TNN1AFO1    Packet-based linear MSP and ATM PWE3

(For the applicable versions and equipment, see the hardware description.)
Maintenance Principles
None.
16.7 Principles
Packet-based linear MSP uses the K1 and K2 bytes in the MSOH of an SDH frame to transmit
the protocol information and thus to control the receive and transmit trails of the service. In this
manner, packet-based linear MSP provides protection for the MS-layer service. Generally,
packet-based linear MSP is classified into 1+1 packet-based linear MSP and 1:N packet-based
linear MSP.
Any of the preceding conditions can trigger the packet-based linear MSP switching. This topic uses a fault in the working path as an example to describe how the packet-based linear MSP switching is triggered.
When the signal in the working path fails, the single-ended switching of 1+1 packet-based linear MSP is implemented as follows:
1. Before the switching, the source end sends service signals to both the working path and the protection path. The sink end selects the service signal from the working path.
2. When detecting that the signal in the working path fails, the line board at the sink end in a certain direction (NE A) reports the SF event to the system control board.
3. When the system control board finds that the signal in the working path fails and that the signal in the protection path is normal, it instructs the cross-connect board to complete the cross-connection between the protection path and the service sink.
4. NE A receives the service signal from the protection path.
5. NE B still receives the service signal from the working path.
When the signal in the working path fails, the dual-ended switching of 1+1 packet-based linear MSP is implemented as follows:
1. Before the switching, the source end sends service signals to both the working path and the protection path. The sink end selects the service signal from the working path.
2. When detecting that the signal in the working path fails, the sink end in a certain direction (NE A) sends the K bytes to the source end (NE B) through the protection path (the request type is "signal fail").
3. NE B sends the K bytes to NE A, also through the protection path (the request type is "reverse request").
4. NE A receives the service signal from the protection path.
5. NE B also receives the service signal from the protection path.
Figure 16-3 Realization principle of 1+1 packet-based linear MSP (before the switching)
(Figure: the common service between NE A and NE B is sent on both paths and received from the working path.)
Figure 16-4 Realization principle of 1+1 packet-based linear MSP (after the switching, in single-ended mode)
(Figure: the switched end receives the common service from the protection path.)
Figure 16-5 Realization principle of 1+1 packet-based linear MSP (after the switching, in dual-ended mode)
(Figure: both NE A and NE B receive the common service from the protection path.)
When the signal in the working path fails, the switching of 1:1 packet-based linear MSP is implemented as follows:
1. Before the switching, the source end and the sink end send and receive normal service signals in the working path, and cannot send or receive extra service signals in the protection path.
2. When detecting that the signal in the working path fails, the sink end in a certain direction (NE A) sends the K bytes to the source end (NE B) through a path between NE A and NE B (the request type is "signal fail").
3. NE B bridges the normal service signals onto the protection path and sends the K bytes to NE A through a path between NE A and NE B (the request type is "reverse request").
4. NE A receives the normal service signals from the protection path and bridges the normal service signals onto the protection path.
5. NE B receives the normal service signals from the protection path.
Figure 16-6 Realization principle of 1:1 packet-based linear MSP (before the switching)
(Figure: the common service between NE A and NE B travels on the working path.)
Figure 16-7 Realization principle of 1:1 packet-based linear MSP (after the switching)
(Figure: the common service between NE A and NE B travels on the protection path.)
Figure 16-8 (Diagram: the CQ1 boards in slot 21 of NE1 (21-CQ1-1 and 21-CQ1-2) connect to the RNC; the PEG8 board in slot 3 connects NE1 to the PSN.)
NOTE
l In Figure 16-8, NE1 is an OptiX OSN 3500 NE. Other products have the same service configurations
and differ only in boards' valid slots. For slots valid for the boards that a product supports, see the
hardware description of the product.
l For details on the boards that an RNC device supports, see specific RNC documents.
As shown in Figure 16-8, TDM links are present between NE1 and the RNC; NE1 is connected
to the RNC through its CQ1 board. The STM-1 services that the RNC receives are transmitted
to NE1 through the CQ1 board and further to the PSN through the PEG8 board. To protect the
channelized STM-1 services between NE1 and the RNC, linear MSP needs to be configured on
both NE1 and the RNC.
NOTE
For details on how to configure the STM-1 services that are transmitted from the RNC to NE1 through the
CQ1 board and further to the PSN through the PEG8 board, see the configuration guide (packet transport
domain). On NE1, configure packet-based linear MSP; on the RNC, configure linear MSP. For
configurations on the RNC, see specific RNC documents.
l 1+1 packet-based linear MSP: Normally, common services are transmitted in the working path and the protection path, and are received from the working path. When the working path between NE1 and the RNC fails, services are received from the protection path.
l 1:1 packet-based linear MSP: Normally, common services are transmitted in the working path. When the working path between NE1 and the RNC fails, the services in the working path are switched to the protection path.
Prerequisites
l Services must not be configured on the port where the protection channel exists.
Procedure
Step 1 In the NE Explorer, select the NE from the Object Tree and then choose Configuration > Packet
Configuration > Packet-Based Linear MS from the Function Tree.
Step 2 Click Create. The system displays the Create a Linear Multiplex Section dialog box.
Step 3 Set the attributes of the linear MSP group.
NOTE
l The protection type can be set to 1+1 or 1:N. When the protection type is set to 1:N, 1:1 protection is
supported.
l When the protection type is set to 1:1, the extra traffic is not carried over the protection path.
The APS protocol can trigger protection switching upon a fault only when the protocol is started. After you click Stop Protocol, the APS protocol is stopped and faults no longer trigger protection switching.
----End
Follow-up Procedure
l Before deleting a packet-based linear MSP protection group, check whether the service is transmitted on the working channel. If the service is transmitted on the protection channel, switch the service to the working channel and then delete the packet-based linear MSP protection group.
l After a packet-based linear MSP protection group is deleted, the carried services are no longer protected and may therefore be affected.
Prerequisites
l The packet-based linear MSP must be created and configured on the U2000.
Procedure
Step 1 Check the switching status of the packet-based linear MSP protection group under test.
1. On the Main Topology of the U2000, right-click the NE configured with packet-based linear MSP and choose NE Explorer from the shortcut menu to display the NE Explorer window.
2. In the Function Tree of the NE Explorer, choose Configuration > Packet Configuration > Packet-based Linear MS.
3. Click Query and choose Query Protection Group from the shortcut menu to refresh the configuration of protection groups on the NE.
4. Click Query and choose Query Switching Status from the shortcut menu. Then, check West Switching Status of the working and protection units in the protection group under test. West Switching Status of both units should be Idle.
Step 2 Disable the working port of the LMSP protection group under test.
1. In the NE Explorer, select the board configured with the LMSP protection and choose Configuration > Packet Configuration > Interface Management > SDH Interface from the Function Tree.
2. On the General Attributes tab, select the working port in the packet-based linear MSP protection group and set Laser Interface Enabling Status to Close.
Step 3 Check the switching status. If West Switching Status of either the working unit or protection
unit is Switching, it indicates a successful switching.
Step 4 Enable the working port of the packet-based linear MSP protection group under test with reference to Step 2.
Step 5 Revert the services to the working tunnel of the LMSP protection group.
l If Revertive Mode of the packet-based linear MSP protection group is set to Revertive, the services are reverted to the working tunnel when the WTR time expires.
l If Revertive Mode of the packet-based linear MSP protection group is set to Non-Revertive, select the protection group, click the Inter-Board Mapping Relation tab, right-click Protection Unit in Protection Unit, and choose Manual Switching to Working from the shortcut menu.
----End
Alarm Name           Meaning
CES_APS_MANUAL_STOP  -
CES_APS_FAIL         -
CES_APS_INDI         -
CES_K1_K2_M          -
CES_K2_M             -
CES_MS_APS_INDI_EX   -
17 Power Consumption Control
17.1 Overview
The power consumption control function helps users manage the power consumption of high
power-consuming data boards and thus ensures the normal operation of the equipment.
Definition
The power consumption of an NE contains the following two types:
l Physical power consumption: the sum of the power consumption of all the physical boards configured on the NE.
l Logical power consumption: the sum of the power consumption of all the logical boards configured on the NE.
The logical board of any board cannot be created when the physical power consumption or the
logical power consumption of an NE exceeds the maximum power consumption.
NOTE
For details about the power consumption values of equipment subracks and boards, see Hardware
Description.
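The power-budget check described above can be sketched as follows (the wattage values are made up for illustration; this is not NE software):

```python
# Minimal sketch: a logical board cannot be created when the physical or
# the logical power consumption of the NE would exceed its maximum.
def can_create_logical_board(physical_w, logical_w, board_w, max_w):
    return physical_w <= max_w and logical_w + board_w <= max_w

print(can_create_logical_board(300, 280, 40, 350))  # True: 320 W fits
print(can_create_logical_board(300, 280, 90, 350))  # False: 370 W exceeds
```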
Purpose
The power consumption of equipment boards (especially data boards) is constantly increasing. When an NE is configured with the maximum number of high power-consuming data boards, the power board of the NE may fail to support the actual power consumption of the NE, resulting in abnormal operation of the NE. To prevent this, users can manage the power consumption of the high power-consuming data boards by using the power consumption control function.
17.2 Availability
The power consumption control function requires the support of the applicable equipment,
boards, and software.
Version Support

Applicable Equipment                Applicable Version   NMS
OptiX OSN 7500II/7500/3500/1500     V200R011C02          T2000, U2000
Hardware Support
l The NG-SDH equipment series support the power consumption control function.
l The boards that support the power consumption control function on the NG-SDH equipment series are as follows: N1EMS4, N1EGS4, N3EGS4, N1EAS2. (For the applicable versions and equipment, see the hardware description.)
17.3 Principles
Users can manage the power consumption of high power-consuming data boards by using the
power consumption control function. When the maximum power consumption of an NE fails to
meet the power consumption requirement of the actual configuration, the boards that support
the power consumption control function fail to work normally.
NOTE
When the actual power consumption of an NE exceeds its permissible maximum power consumption, the
logical boards for all the newly inserted boards cannot be created successfully.
The board that supports the power consumption control function may be in either of the following two states:
l Low power consumption state: In this state, the service chips of the board are not initialized, and the board cannot work normally. In this case, the board requests the NE software to enter the high power consumption state.
l High power consumption state: In this state, all the service chips of the board are initialized, and the board can work normally.
Figure 17-1 shows the migration between the low power consumption state and the high power
consumption state of a board.
Figure 17-1 Migration of power consumption states
(Diagram: successful board addition moves a board from the low power consumption state to the high power consumption state; successful board deletion or a cold reset of the board returns it to the low power consumption state.)
If a board can be successfully added, the NE software issues a command to enable it to enter the
high power consumption state. In this case, the board enters the high power consumption state.
After the service chips of the boards are initialized, the board starts to work normally.
If a board is being deleted or is not configured, or if a cold reset is performed on a board, the
board remains in the low power consumption state.
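The state transitions in Figure 17-1 can be sketched as a small table-driven state machine (illustrative only, not NE software):

```python
# Table-driven sketch of the power-state transitions in Figure 17-1.
TRANSITIONS = {
    ("low", "board_added"): "high",    # successful board addition
    ("high", "board_deleted"): "low",  # successful board deletion
    ("high", "cold_reset"): "low",     # cold resetting of the board
}

def next_state(state, event):
    # Events with no defined transition leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

print(next_state("low", "board_added"))  # high
print(next_state("high", "cold_reset"))  # low
```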
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 Choose Configuration > NE Batch Configuration > Power Management from the Main
Menu.
Step 2 In the Object Tree, select one or more NEs and click
Step 3 In the NE Power Consumption Threshold(W) field, set the NE power consumption threshold.
NOTE
A default value is set in the NE Power Consumption Threshold(W) field for each type of NE. It is
recommended that you do not modify the value.
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 Choose Configuration > NE Batch Configuration > Power Management from the Main
Menu.
Step 2 Query the power consumption according to the requirement.
----End
Alarm Name      Meaning
BD_AT_LOWPOWER  -
NE_POWER_OVER   -
18 Dual-Homing Protection
(Figure 18-1: NE1 connects to a NodeB and, on the NNI side, runs PW APS/tunnel APS toward the dual-homed nodes NE2 and NE3, which run E-LAN services and form an MC-LAG on the UNI side; a multi-chassis synchronous communication channel links NE2 and NE3, and the standby link carries no services.)
l Protection objects: NNI-side service channels, dual-homed nodes, and links on the UNI side of dual-homed nodes
MPLS tunnel APS/PW APS/PW FPS and MC-LAG are independent of each other, so faults on the NNI side and faults on the UNI side can be processed separately. This improves the efficiency of protection switching and service reliability.
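The independence of the two mechanisms can be sketched as follows (a simplified model for illustration; the node, PW, and LAG names follow the example in this chapter):

```python
# Simplified model of the independence property: a UNI-side fault toggles
# only the MC-LAG active link, and an NNI-side fault toggles only the
# active PW; neither mechanism touches the other.
def handle_fault(fault_side, state):
    state = dict(state)  # leave the caller's state untouched
    if fault_side == "UNI":
        state["mc_lag_active"] = "LAG2" if state["mc_lag_active"] == "LAG1" else "LAG1"
    elif fault_side == "NNI":
        state["active_pw"] = "PW2" if state["active_pw"] == "PW1" else "PW1"
    return state

s0 = {"mc_lag_active": "LAG1", "active_pw": "PW1"}
print(handle_fault("UNI", s0))  # {'mc_lag_active': 'LAG2', 'active_pw': 'PW1'}
print(handle_fault("NNI", s0))  # {'mc_lag_active': 'LAG1', 'active_pw': 'PW2'}
```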
Working Principle
As shown in Figure 18-2, E-Line services from the NodeB are transmitted to the RNC over the
PSN; NE1, NE2, and NE3 work with the RNC to achieve dual-homing protection of E-LAN
services.
Figure 18-2 RNC dual-homing
(Diagram: on the UNI side, LAG3 on the RNC connects to LAG1 on NE2 and LAG2 on NE3, forming an MC-LAG; on the NNI side, PW APS/MPLS tunnel APS protects the channels between NE1 and the dual-homed nodes NE2 and NE3; a multi-chassis synchronous communication channel links NE2 and NE3, and the standby link carries no services.)
The RNC dual-homing protection solution consists of the NNI-side MPLS tunnel APS/PW
APS/PW FPS and the UNI-side MC-LAG between the dual-homed nodes.
l On the NNI side, a 1:1 MPLS tunnel APS/PW APS/PW FPS protection group is configured on NE1.
l On the UNI side, a LAG (LAG1) is configured on NE2 and a LAG (LAG2) is configured on NE3 to form an MC-LAG between NE2 and NE3. In addition, a LAG (LAG3) is configured on the RNC.
Meanwhile, E-LAN services are configured on the two dual-homed nodes (NE2 and NE3), and
UNI-NNI E-Line services are configured on NE1.
Normally, on the NNI side, services are carried on the working channel in the MPLS tunnel
APS/PW APS/PW FPS protection group; on the UNI side, MC-LAG functions to select either
of NE2 or NE3 as the active equipment for service forwarding. For example, if NE2 is selected
as the main equipment, services are carried by LAG1. Generally, the working channel in the
MPLS tunnel APS/PW APS/PW FPS protection group on the NNI side and the working link in
the MC-LAG protection group on the UNI side are specified to ensure that the working channel
on the NNI side and the working link on the UNI side are connected to the same dual-homed
node. Figure 18-2 shows the signal flow.
The following paragraphs and figures describe the working principle of the RNC dual-homing protection solution when PW APS is configured on the NNI side. When MPLS tunnel APS is configured on the NNI side, the working principle is similar.
Scenario 1: The UNI-Side Link Connected to the Working Node of the Dual-Homed
Nodes Fails
As shown in Figure 18-3, when the UNI-side link (LAG1) connected to the working node of
the dual-homed nodes fails, the protection switching is implemented as follows:
l When the RNC detects a fault on the UNI-side working link (LAG1) that carries services, the services are switched to the UNI-side protection link (LAG2). The services from the RNC are transmitted to the protection port in the MC-LAG (port A on NE3).
l Within the PSN, PW APS switching is not triggered. On NE3, the services are broadcast within the E-LAN and then transmitted to NE2 over the MC-link. Within the packet network, the services are still carried by the working PW (PW1).
Figure 18-3 Fault on the UNI-side link connected to the working node of the dual-homed nodes
(Diagram: after LAG1 fails, traffic from the RNC enters NE3 through LAG2, is broadcast within the E-LAN, and crosses the MC-link to NE2; within the PSN the services are still carried by the working PW (PW1) under PW APS.)
Figure 18-3 shows the signal flow after the protection switching.
After the faulty link recovers, the way that services are restored is similar to the preceding
switching process.
l When NE1 detects a fault on the working PW (PW1) by means of the PW OAM mechanism,
NE1 triggers PW APS switching. Services are switched to the protection PW (PW2).
l Switching in the MC-LAG is not triggered. On NE3, the services are broadcast within the
E-LAN and then the services are transmitted to NE2 over the MC-link. On the UNI side,
services are still transmitted through the working port (port A on NE2) in the MC-LAG
group.
Figure 18-4 shows the signal flow after the protection switching.
After the faulty PW recovers, the way that services are restored is similar to the preceding
switching process.
l When NE1 detects a fault on the working PW (PW1) by means of the PW OAM mechanism,
NE1 triggers the PW APS switching. Services are switched to the protection PW (PW2).
NE3 bidirectionally bridges the protection PW (PW2) onto the UNI-side protection link
(LAG2).
l When the RNC detects a fault on the UNI-side working link (LAG1) that carries services,
the services are switched to the UNI-side protection link (LAG2).
Figure 18-5 shows the signal flow after the protection switching.
After the faulty node recovers, the way that services are restored is similar to the preceding
switching process.
CAUTION
The current protection solution cannot protect services when the following combinations of
faults occur:
l The UNI-side link fails after the MC-link is faulty: On the UNI side, the switching process
in the MC-LAG is similar to that described in scenario 1. After the switching in the MC-LAG,
however, the services from the RNC are transmitted to the protection port (port A on
NE3). Because the MC-link is faulty, the services fail to be transmitted to the working
channel (PW1) within the PSN.
l A fault occurs on the PSN after the MC-link is faulty: When a fault occurs on the PSN, the
PW APS switching is triggered and then the services are carried on the protection PW (PW2).
The UNI-side link works properly, and therefore the services are still carried by the working
port (port A on NE2). The MC-link is faulty, and as a result, the services received at PW2
fail to be transmitted to the NE where the working port in the MC-LAG group resides.
Table 18-1 describes the operations in the configuration procedure for RNC dual-homing
protection.
Table 18-1 Flowchart for configuring RNC dual-homing
Configure E-LAN services on the two dual-homed nodes, and bridge the UNIs, NNIs, and the
ports on the MC-link. The multi-chassis synchronous communication tunnel and the MC-link
are carried by the same physical link.
Configuring protection on the UNI side:
l Configuring the multi-chassis synchronous communication channel
l Configuring an MC-LAG
Figure 18-7 illustrates how the EoS services from only one NodeB are converged at NE2 and NE3,
which serves as a simple example. On a live network, multi-level convergence of EoS services
is common, but the scenario at each level of convergence is similar.
As shown in Figure 18-8, the following protection schemes can be configured for the services
that traverse two domains:
l At NE4, configure SNCP to protect the EoS services from the NodeB.
l At the convergence nodes NE2 and NE3, configure LAGs and an MC-LAG to protect
cross-domain services in the TDM domain and packet domain respectively.
l At NE1, NE2, and NE3, configure MPLS tunnel OAM or PW OAM. At NE1, configure
MPLS tunnel APS or PW APS to protect the packet services.
NOTE
MPLS tunnel APS and PW APS cannot exist at the same time. In this topic, configuration of MPLS tunnel
APS serves as an example. You can also select PW APS according to actual requirements.
The configuration procedure is as follows:
1. Configure network ports.
2. Configure services in the TDM domain.
3. Configure SNCP in the TDM domain.
4. Configure LAGs.
5. Configure MPLS tunnels.
6. Configure an MC-LAG.
7. Configure MPLS tunnel APS in the packet domain.
8. Configure services in the packet domain.
NOTE
Configurations of services in the TDM domain, services in the packet domain, and cross-domain protection
do not need to be performed in a strict sequence. In the figure, the sequence numbers are used to facilitate
description.
l Configuring network ports
l Configuring services in the TDM domain
l Configuring SNCP for the services in the TDM domain
l Configuring LAGs
l Configuring MPLS tunnels
l Configuring an MC-LAG
l Configuring protection in the packet domain
l Configuring services in the packet domain
Protection Solution
Figure 18-10 shows a protection solution for a network wherein the TDM domain and the packet
domain overlap. Generally, NE1 is an OptiX OSN 1500 or OptiX 155/622H (OptiX Metro 1000)
NE; NE2, NE3, and NE4 are OptiX OSN 3500/7500/7500 II NEs.
NOTE
The TDM network can be configured with MSP or SNCP, and no details are provided. This topic describes
the protection solution applicable to the packet network.
A packet network generally uses the RNC dual-homing method for protection. For details, see
18.1 RNC Dual-Homing. The protection solution is configured as follows:
l
Working Principle
As shown in Figure 18-11, services are present between the NodeB and the RNC.
l NNI side
Working path: NE1-NE2-NE3; protection path: NE1-NE2-NE4
l UNI side 2
Working path: NE3-RNC; protection path: NE4-RNC
In normal situations, the signal flow between the NodeB and the RNC is as follows: NodeB-NE1-NE2-NE3-RNC. When a fault occurs on the network, the signal flow changes as follows:
l A fault on the working path on the NNI side (only at fault point 1)
On the NNI side, services are switched to the protection path; on the UNI side 2, services
are still transmitted over the working path.
The signal flow is as follows: NodeB-NE1-NE2-NE4-NE3-RNC.
l A fault on the working path on the UNI side 2 (only at fault point 2)
On the NNI side, services are still transmitted over the working path; on the UNI side 2,
services are switched to the protection path.
The signal flow is as follows: NodeB-NE1-NE2-NE3-NE4-RNC.
l Faults on the working paths both on the NNI side and the UNI side 2 (at fault points 1 and 2)
On the NNI side, services are switched to the protection path; on the UNI side 2, services
are switched to the protection path.
The signal flow is as follows: NodeB-NE1-NE2-NE4-RNC.
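The fault-to-signal-flow mapping listed above can be summarized in a short sketch (fault point 1 is on the NNI-side working path, fault point 2 on the working path on the UNI side 2). The table simply mirrors the text and is illustrative only.

```python
# Signal flows for the Figure 18-11 topology, keyed by the set of active
# fault points. The entries reproduce the flows stated in the text.
FLOWS = {
    frozenset():       ["NodeB", "NE1", "NE2", "NE3", "RNC"],
    frozenset({1}):    ["NodeB", "NE1", "NE2", "NE4", "NE3", "RNC"],
    frozenset({2}):    ["NodeB", "NE1", "NE2", "NE3", "NE4", "RNC"],
    frozenset({1, 2}): ["NodeB", "NE1", "NE2", "NE4", "RNC"],
}

def signal_flow(*fault_points: int) -> str:
    """Return the resulting NodeB-to-RNC path for the given fault points."""
    return "-".join(FLOWS[frozenset(fault_points)])

print(signal_flow())      # NodeB-NE1-NE2-NE3-RNC
print(signal_flow(1, 2))  # NodeB-NE1-NE2-NE4-RNC
```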
NOTE
l When any fault occurs on the working path on the NNI side, the protection switching process is similar.
This topic describes the protection switching process when a fault occurs at fault point 1 for your
reference.
l When a fault occurs on the protection path, services are not affected or switched. No details are
provided.
NOTE
The network between the routers and the RNC requires a protection solution applicable to Layer 3 networks,
and no details are provided. If using Huawei routers, see the user manuals of the routers for configuration
methods.
Working Principle
As shown in Figure 18-13, services are present between the NodeB and the RNC.
l
This topic describes only the protection switching between the Hybrid MSTP equipment, and no details
about the protection switching between the routers and the RNC are provided.
Figure 18-13 Faults on the hybrid network consisting of Hybrid MSTP equipment and routers
In normal situations, the signal flow between the NodeB and the RNC is as follows: NodeB-NE1-NE3-Router1-RNC. When a fault occurs on the network, the signal flow changes as
follows:
l A fault on the working path on the Layer 2 network (only at fault point 2)
Within the PSN, services are still transmitted on the working path; within the Layer 2
network, services are switched to the protection path.
The signal flow is as follows: NodeB-NE1-NE3-NE2-Router2-Router1-RNC.
NOTE
When NE1, NE2, and NE3 are configured with PW APS/PW FPS protection, enable the IWF function
on NE2 and NE3. In this case, when a fault occurs at fault point 2, the system inserts an FDI packet
into the PW between NE3 and NE1 to trigger PW APS/PW FPS. After PW APS/PW FPS is performed,
services are switched to the protection path.
l Faults on the working paths both on the PSN and the Layer 2 network (at fault points 1 and 2)
Within the PSN, services are switched to the protection path; within the Layer 2
network, services are switched to the protection path.
The signal flow is as follows: NodeB-NE1-NE2-Router2-Router1-RNC.
Protection Solution
Figure 18-14 shows a protection solution for a network consisting of Hybrid MSTP equipment
and PTN equipment. Generally, NE1 is an OptiX PTN 910/950 NE; NE2, NE3, and NE4 are
OptiX OSN 3500/7500/7500 II NEs.
In this solution, the RNC dual-homing method is used. For details, see 18.1 RNC Dual-Homing. The protection solution is configured as follows:
l
Working Principle
As shown in Figure 18-15, services are present between the NodeB and the RNC.
l NNI side
Working path: NE1-NE2-NE4; protection path: NE1-NE2-NE3
l UNI side 2
Working path: NE4-RNC; protection path: NE3-RNC
Figure 18-15 Faults on the hybrid network consisting of Hybrid MSTP equipment and PTN
equipment
In normal situations, the signal flow between the NodeB and the RNC is as follows: NodeB-NE1-NE2-NE4-RNC. When a fault occurs on the network, the signal flow changes as follows:
l A fault on the working path on the NNI side (only at fault point 1)
On the NNI side, services are switched to the protection path; on the UNI side 2, services
are still transmitted on the working path.
The signal flow is as follows: NodeB-NE1-NE2-NE3-NE4-RNC.
l A fault on the working path on the UNI side 2 (only at fault point 2)
On the NNI side, services are still transmitted on the working path; on the UNI side 2,
services are switched to the protection path.
The signal flow is as follows: NodeB-NE1-NE2-NE4-NE3-RNC.
l Faults on the working paths both on the NNI side and the UNI side 2 (at fault points 1 and 2)
On the NNI side, services are switched to the protection path; on the UNI side 2, services
are switched to the protection path.
The signal flow is as follows: NodeB-NE1-NE2-NE3-RNC.
NOTE
l When any fault occurs on the working path on the NNI side, the service protection switching is similar.
This topic describes the protection switching process when a fault occurs at fault point 1 for your
reference.
l When a fault occurs on the protection path, services are not affected or switched. No details are
provided.
Protection Solution
Figure 18-16 shows a protection solution for a network consisting of Hybrid MSTP equipment
and RTN equipment. Generally, NE1 is an OptiX PTN 910 or OptiX RTN 950 NE; NE2 is an
OptiX OSN 1500 or OptiX 155/622H (OptiX Metro 1000) NE; NE3, NE4, and NE5 are OptiX
OSN 3500/7500/7500 II NEs.
1:1 MPLS tunnel APS or 1:1 PW APS is used on the NNI side 1.
The RNC dual-homing method is used for protecting the network between the Hybrid MSTP
equipment and the RNC. For details, see 18.1 RNC Dual-Homing.
l
Working Principle
As shown in Figure 18-17, services are present between the NodeB and the RNC.
NOTE
On the NNI side 1, the RTN equipment is interconnected with the Hybrid MSTP equipment; 1:1 MPLS
tunnel APS or 1:1 PW APS is configured to protect the interconnection. If a fault occurs on the working
path on the NNI side 1, the signal flow on the NNI side 2 is not affected. Therefore, this topic focuses on
the signal flow on the NNI side 2 and UNI side 2.
l NNI side 2
Working path: NE2-NE3-NE4; protection path: NE2-NE3-NE5
l UNI side 2
Working path: NE4-RNC; protection path: NE5-RNC
Figure 18-17 Faults on the hybrid network consisting of Hybrid MSTP equipment and RTN
equipment
In normal situations, the signal flow between the NodeB and the RNC is as follows: NodeB-NE1-NE2-NE3-NE4-RNC. When a fault occurs on the network, the signal flow changes as
follows:
l A fault on the working path on the NNI side 2 (only at fault point 1)
On the NNI side 2, services are switched to the protection path; on the UNI side 2, services
are still transmitted on the working path.
The signal flow is as follows: NodeB-NE1-NE2-NE3-NE5-NE4-RNC.
l A fault on the working path on the UNI side 2 (only at fault point 2)
On the NNI side 2, services are still transmitted on the working path; on the UNI side 2,
services are switched to the protection path.
The signal flow is as follows: NodeB-NE1-NE2-NE3-NE4-NE5-RNC.
l Faults on the working paths both on the NNI side 2 and the UNI side 2 (at fault points 1 and 2)
On the NNI side 2, services are switched to the protection path; on the UNI side 2, services
are switched to the protection path.
The signal flow is as follows: NodeB-NE1-NE2-NE3-NE5-RNC.
NOTE
l When any fault occurs on the working path on the NNI side, the service protection switching is similar.
This topic describes the protection switching process when a fault occurs at fault point 1 for your
reference.
l When a fault occurs on the protection path, services are not affected or switched. No details are
provided.
19 RMON
All statistics data is saved at the agent, and data collection continues even when the manager is
out of service.
Based on the preceding purposes, the RMON function defines a series of statistics formats and
functions to implement data exchange between the control stations and detection stations that
comply with the RMON standards. To meet the requirements of different networks, the RMON
function provides flexible detection modes and control mechanisms. In addition, the RMON
function provides error diagnosis, planning, and reception of performance events for the entire
network.
19.2.1 SNMP
Currently, the Simple Network Management Protocol (SNMP) is the most widely used network
management protocol. SNMP is used to transport management information between any two
nodes in the network. This enables the network administrator to retrieve and modify
information, locate and diagnose faults, plan capacity, and generate reports for any node in the
network.
NMS
The NMS is a workstation on which the client program runs.
The NMS can send request packets to the agent. After receiving these request
packets, the agent performs the corresponding operations according to the packet types,
generates response packets, and sends the response packets to the NMS.
l Agent
The agent is server software running on the network equipment; it is embedded
in the Ethernet unit.
When an exception occurs in the equipment or the state of the equipment changes (for
example, the equipment restarts), the agent sends a Trap packet to the NMS to report
the event.
The transmission of SNMP packets is based on the connectionless transport-layer protocol UDP.
Hence, the equipment can interconnect with a wide variety of equipment without restrictions.
MIB
The management information base (MIB) refers to the managed variables in SNMP packets that
describe the managed objects in the equipment. SNMP uses a hierarchical naming scheme to
uniquely identify each managed object in the equipment. The overall structure is like a tree:
the nodes on the tree indicate the managed objects, and each node can be uniquely identified
by a path starting from the root. The MIB describes the architecture of the tree and is the
collection of the definitions of the standard variables of the monitored network equipment.
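The tree-structured naming described above can be sketched as follows. The miniature tree and the `resolve` helper are our own illustration (the path 1.3.6.1.2.1 is the standard mib-2 subtree identifier), not Huawei's proprietary MIB.

```python
# Minimal sketch of MIB-style naming: each managed object is identified by
# the path of numeric labels from the root of the tree (its OID).
# Tree content is invented for illustration.
mib_tree = {
    1: {3: {6: {1: {2: {1: {"name": "mib-2"}}}}}},
}

def resolve(oid: str):
    """Walk the tree along a dotted OID such as '1.3.6.1.2.1'."""
    node = mib_tree
    for label in oid.split("."):
        node = node[int(label)]
    return node

print(resolve("1.3.6.1.2.1")["name"])  # mib-2
```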
NOTE
The OptiX OSN equipment uses a Huawei proprietary MIB. For the contents that a third-party system
can monitor based on SNMP, consult Huawei field engineers.
19.4 Availability
The RMON function requires the support of the applicable equipment and boards.
Version Support
Applicable Equipment
Applicable Version
T2000
U2000
Applicable Equipment
Applicable Version
T2000
U2000
Applicable Equipment
Applicable Version
U2000
Applicable Board
Applicable Version
Applicable Equipment
N1EMS4
N1EGS4
N3EGS4
Applicable Board
Applicable Version
Applicable Equipment
N1EFS4
N2EFS4
N2EFS0
N4EFS0
N2EGS2
N1EAS2
N1EGT2
R1EFT4
N1EFT8
N1EFT8A
N2EMR0
N2EGR2
N3EFS4
N3EGS2
N4EGS4
N2EGT2
N5EFS0
N1EFS0A
N1EMS2
Applicable Version
Applicable Equipment
N1PEG16
N1PEX1
N1PETF8
R1PEFS8
Q1PEGS2
R1PEGS1
N1PEG8
N2PEX1
N1PEX2
N1PEFF8
N1EDQ41
R1PEF4F
TNN1EX2
TNN1EG8
TNN1ETMC
TNN1EFF8
Remarks
Management
group
Applicable
Object
Remarks
Statistics
Maintenance Principles
None.
Project Description
Project R uses the chain networking that consists of station A, station B, station C, station D,
and station E. Figure 19-1 shows the networking diagram of project R. For example, in the case
of a unidirectional service, station E needs to monitor the Ethernet services on station A.
Figure 19-1 Networking diagram of project R
In the case of project R, an Ethernet service GE1 exists between station A and station E. You
can enable the RMON function to realize the remote monitoring between station E and station
A. By querying the RMON performance of the corresponding Ethernet service board on station
E, you can learn the information, such as the service performance and alarms, of the Ethernet
board on the transmit end (that is, station A).
Prerequisites
l
Procedure
Step 1 Select the corresponding Ethernet board in the NE Explorer. Choose Performance > RMON
Performance from the Function Tree.
Step 2 Click the History Group tab.
Step 3 Select the object from the drop-down list of Object. Set the range of the performance to be
browsed, Ended From, To, History Table Type, and Display Mode.
Step 4 Click Query, and the information about the history group performance of the Ethernet port is
displayed.
----End
Prerequisites
l
Procedure
Step 1 Select the corresponding Ethernet board in the NE Explorer. Choose Performance > RMON
Performance from the Function Tree.
Step 2 Click the Statistics Group tab.
Step 3 Select the object from the drop-down list of Object. Set the range of performance to be browsed,
Query Conditions, and Display Mode.
Step 4 Click Start. The information about the statistics group performance of the Ethernet port is
queried and then displayed.
----End
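As a rough illustration of what statistics-group counters can be used for, the sketch below derives port utilization from two successive octet-counter readings. The sampling interval, line rate, and formula (which ignores preamble and inter-frame gap overhead) are simplifying assumptions of ours, not product behavior.

```python
# Hedged sketch: percentage of line rate used during a sampling interval,
# computed from two readings of an octet counter.

def utilization(octets_start: int, octets_end: int,
                interval_s: float, line_rate_bps: float) -> float:
    """Utilization (%) over the interval, from octet-counter deltas."""
    bits = (octets_end - octets_start) * 8
    return 100.0 * bits / (interval_s * line_rate_bps)

# Example: 30-second sample on a 100 Mbit/s FE port.
print(round(utilization(0, 150_000_000, 30.0, 100e6), 1))  # 40.0
```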
Prerequisites
You must be an NM user with NE monitor authority or higher.
Context
The equipment supports the RMON performance monitoring for the physical port and the service
object. Hence, you can customize the RMON performance attribute template for the physical
port and the service object.
Procedure
Step 1 Select the NE in the NE Explorer. Use the following method to display the interface for browsing
the performance of the physical port or the service object.
Type (Example)
Entry
Unicast Tunnel
Ethernet service
NOTE
If you need to display the interface for browsing the performance of other service objects, refer to the
examples listed in the table to find the proper entry.
Prerequisites
You must be an NM user with NE monitor authority or higher.
Procedure
Step 1 Select the NE in the NE Explorer. Choose Performance > RMON History Control Group
from the Function Tree.
Step 2 Set parameters that are related to 30-Second, 30-Minute, Custom Period 1, and Custom Period
2.
NOTE
Step 3 Click Apply. A dialog box is displayed, indicating that the operation is successful.
Networking Diagram
An E-Line service that is carried by the pseudo wire (PW) is taken as an example, as shown in
Figure 19-2.
l User A1 accesses NE1 through 21-PETF8-1, and User B1 accesses NE1 through 21-PETF8-2.
l User A2 accesses NE2 through 21-PETF8-1, and User B2 accesses NE2 through 21-PETF8-2.
l User A1 and User A2 need to communicate with each other. User B1 and User B2 need to
communicate with each other.
Figure 19-2 Networking diagram for the E-Line service carried by the PW
Symptom
The data service between User A1 and User A2 becomes abnormal (the service is interrupted
or certain packets are lost).
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 Configure the Ethernet services that are carried by the PW. For details, see Configuration
Example: E-Line Services Carried by PWs.
Step 2 Refer to 19.7.4 Configuring an RMON History Control Group to enable the RMON function
of monitoring the performance in the history control groups.
The parameters for the history control group are set as follows:
l History table type: 30 seconds
l Monitored objects: NE
l Number of items: 50 (16 by default)
Step 3 Refer to 19.7.3 Customizing the RMON Performance Attribute Template to enable the
RMON function of monitoring the performance of the service.
The parameters are set as follows:
l History table type: 30 seconds
l Monitored objects: V-UNI, PW
l Monitoring status: Enabled
CAUTION
To query the statistics of the PW performance, you need to set the QoS policy for the PW.
Step 4 Refer to 19.7.1 Browsing the History Group Performance of an Ethernet Port and 19.7.2
Browsing the Statistics Group Performance of an Ethernet Port to query the performance
of the V-UNI on the Ethernet board at the same period.
Step 5 Analyze the RMON performance and check whether the data is proper.
l If the PAUSE frame exists, proceed to Step 7.
l If the FCS error exists, see the description of the handling of the FCS error performance event
in Alarms and Performance Events Reference.
l If the collision or fragment exists, see the description of the handling of the collision
performance event in Alarms and Performance Events Reference.
l If the problem is about the maximum transmission unit (MTU), set the MTU to a proper
value.
1. In the NE Explorer, select the Ethernet board. Choose Configuration > Ethernet
Service Management > E-Line Service in the Function Tree.
2.
Step 6 Check the relation of the traffic between the ports on the Ethernet board.
l If the input traffic is equal to the output traffic, it indicates that the fault does not occur on
the board but may occur at the interconnection point between Huawei equipment and the
equipment of the customer. Enable the ping function to check the link between Huawei
equipment and the equipment of the customer.
l If the input traffic is not equal to the output traffic, it indicates that the fault occurs on the
Ethernet board. Check the relation of the traffic between the V-UNI of the Ethernet board
and the corresponding PW to determine the board where the fault (congestion or signal
degrade) occurs.
Step 7 Enable the ping function to check whether the fault occurs on the link between Huawei equipment
and the interconnected data communication equipment.
----End
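The Step 6/Step 7 decision above can be sketched as a small helper; the tolerance handling and return strings are our own illustration, not U2000 output.

```python
# Toy version of the Step 6 check: compare input and output traffic on the
# Ethernet board to decide where to look next.

def locate_fault(in_octets: int, out_octets: int, tolerance: int = 0) -> str:
    if abs(in_octets - out_octets) <= tolerance:
        # Traffic is conserved on the board: suspect the interconnection
        # with the customer equipment and verify the link with ping.
        return "check interconnection link (ping)"
    # Traffic is lost on the board: compare V-UNI and PW counters next.
    return "check board (V-UNI vs PW counters)"

print(locate_fault(1000, 1000))  # check interconnection link (ping)
print(locate_fault(1000, 730))   # check board (V-UNI vs PW counters)
```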
Name of a Performance
Entry
Description
Unit
Collisions/ETHCOL
times(times/s)
Name of a Performance
Entry
Description
Unit
Drop Events/ETHDROP
times(times/s)
Fragments/ETHFRG
packets(packets/s)
Broadcast Packets
Received/RXBRDCAST
packets(packets/s)
packets(packets/s)
Name of a Performance
Entry
Description
Unit
Octets Received/
RXOCTETS
Byte(Byte/s)
packets(packets/s)
packets(packets/s)
packets(packets/s)
packets(packets/s)
packets(packets/s)
Name of a Performance
Entry
Description
Unit
packets(packets/s)
packets(packets/s)
Undersized packets
received/ETHUNDER
packets(packets/s)
packets(packets/s)
Abbreviation
Description
Unit
frames(frames/s)
Control Frames
Transmitted/TXCTLPKTS
frames(frames/s)
Packets transmitted/
TXPKTS
packets(packets/s)
Abbreviation
Description
Unit
Broadcast packets
transmitted /TXBRDCAST
packets(packets/s)
Multicast packets
transmitted /TXMULCAST
packets(packets/s)
packets(packets/s)
packets(packets/s)
packets(packets/s)
packets(packets/s)
Abbreviation
Description
Unit
packets(packets/s)
packets(packets/s)
Bytes transmitted /
TXOCTETS
Byte(Byte/s)
Byte(Byte/s)
Byte(Byte/s)
Byte(Byte/s)
Byte(Byte/s)
packets(packets/s)
Abbreviation
Description
Unit
packets(packets/s)
packets(packets/s)
packets(packets/s)
packets(packets/s)
packets(packets/s)
packets(packets/s)
Abbreviation
Description
Unit
frames(frames/s)
Alignment Errors/ETHALI
frames(frames/s)
FCS Errors/ETHFCS
frames(frames/s)
packets(packets/s)
packets(packets/s)
Utilization/ETHUTILIZ
%
Abbreviation
Description
Unit
Unicast Packets
Transmitted/TXUNICAST
packets(packets/s)
Oversize Packets
Transmitted/TXETHOVER
packets(packets/s)
PORT_RX_BYTES_AVAILABILITY
Abbreviation
Description
Unit
Alignment Errors/
ETHALI
A count of frames
received on a particular
interface that are not an
integral number of octets
in length and do not pass
the FCS check.
frames(frames/s)
times(times/s)
Excessive Collisions/
ETHEXCCOL
frames(frames/s)
Abbreviation
Description
Unit
FCS Errors/ETHFCS
A count of frames
received on a particular
interface that are an
integral number of octets
in length but do not pass
the FCS check. This count
does not include frames
received with frame-too-long or frame-too-short
errors.
frames(frames/s)
Late Collisions/
ETHLATECOL
times(times/s)
Multiple Collision
Frames/ETHMULCOL
A count of successfully
transmitted frames on a
particular interface for
which transmission is
inhibited by more than
one collision.
frames(frames/s)
A count of successfully
transmitted frames on a
particular interface for
which transmission is
inhibited by exactly one
collision.
frames(frames/s)
packets(packets/s)
packets(packets/s)
Abbreviation
Description
Unit
packets(packets/s)
packets(packets/s)
packets(packets/s)
packets(packets/s)
packets(packets/s)
Abbreviation
Description
Unit
Deferred Transmissions/
TXDEFFRM
frames(frames/s)
Drop Events at
Transmission Direction/
TXETHDROP
times(times/s)
Pause Frames
Transmitted/TXPAUSE
frames(frames/s)
20 Clock Solution
Frequency Synchronization
The OptiX OSN equipment supports the following frequency synchronization solutions:
l
CES ACR
The first two solutions are implemented at the physical layer, and the last two solutions are
implemented using software protocols at a non-physical layer.
Time Synchronization
Time synchronization is realized based on frequency synchronization. The OptiX OSN
equipment supports the IEEE 1588v2 time synchronization.
20.1 SDH Clock Synchronization
A stable clock ensures normal operation of an NE. To obtain a stable clock, adhere to certain
configuration principles.
20.2 IEEE 1588v2 Time Synchronization and Clock Synchronization
The equipment that complies with IEEE 1588v2 can realize the frequency and time
synchronization. The IEEE 1588v2 protocol ensures time synchronization at the microsecond
level.
20.3 Synchronous Ethernet Clock
The synchronous Ethernet clock is a technology that recovers the clock from the bit stream on
the Ethernet link.
20.4 CES ACR
This chapter describes the CES ACR feature.
20.1.1 Clock
Clock synchronization on the entire network helps to transmit services normally.
Definition
SDH clock synchronization is a technology of frequency synchronization at the physical layer
that is supported by all line boards. The system directly extracts clock signals from SDH signals
and transmits the clock signals to each board, thereby implementing the transfer of clock
information.
Purpose
Clock synchronization ensures that all the digital devices on a communications network work
at the same nominal frequency, and therefore minimizes the impacts of slips, burst bit errors,
phase jumps, jitters, and wanders on digital communications systems. Clock synchronization
also minimizes pointer justifications on SDH devices. Therefore, clock synchronization is the
precondition and basis for the normal operation of a network.
Pseudo synchronization means that the clocks of the NEs are totally independent of each
other. The clocks, however, are of high precision and stability. Normally, cesium clocks
are used. Thus, pseudo synchronization is maintained among the clocks although
frequencies and phases of the clocks are not exactly the same.
level clocks. In this way, clock synchronization is maintained on the entire network. In master/slave synchronization, the working modes of the slave clock are as follows:
l
Tracing mode: It is the normal working mode. In this mode, the local clock is synchronized
with the input reference clock signals.
NOTE
The nodes that have clock tracing relationships between each other comprise a clock subnet.
l Holdover mode: When all timing reference signals are lost, the slave clock enters the
holdover mode. In this mode, the slave clock takes timing reference from the last frequency
information saved before the loss of timing reference signals. This mode can be used to
cope with an interruption of external timing signals lasting many days. The holdover mode
is available in two types: permanent holdover and 24-hour holdover.
l Free-run mode: When all timing reference signals are lost, the slave clock enters the free-run
mode if the slave clock loses the configuration data about the timing reference signals
or fails to enter the holdover mode.
l External clock source: 2M timing signals from the external clock interface of an NE
l Line clock source: timing signals extracted from optical signals that are received by the
line board
l Tributary clock source: timing signals extracted from signals that are received by the
tributary board
l Internal clock source: internal timing source (available for each NE) that an NE uses when
the external source is lost
l Microwave clock source: timing signals extracted from received microwave signals
S1 Byte
The S1 byte is located in row 9, column 1 in the multiplex section overhead in an SDH frame
structure. The lower four bits (bits 5-8) of the S1 byte are allocated to transport a synchronization
status of an NE, which is referred to as the synchronization status message byte (SSMB). Table
20-1 shows the meaning of clock quality that the SSMB stands for. The smaller the SSMB value,
the higher the quality of the clock source that the SSMB represents.
In a clock network, the node that is connected to an external clock extracts a reference timing
source from the BITS equipment, writes an SSMB to bits 5-8 of S1 byte, and transports the
SSMB to downstream nodes. In this way, the SSMB is output. A downstream node extracts the
timing source from a line signal and obtains the clock quality level from bits 5-8 of the S1 byte.
In this way, the downstream node determines whether the current clock source is valid, and
transmits 0xf back to the upstream node through bits 5-8 of the S1 byte. The value 0xf indicates
that the returned clock source is unavailable. This prevents two nodes from mutually tracing
each other's timing source.
Each node obtains the quality of all clock sources from the S1 byte, and chooses to trace a clock
source according to the preset priority level.
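The bit layout described above can be sketched in a few lines. This is an illustrative sketch only, not equipment software; the helper names are hypothetical, and the sketch assumes the S1 byte is handled as an 8-bit integer with bits 1-4 as the most significant half.

```python
# Illustrative sketch of S1-byte handling (hypothetical helper names).
# Bits 1-4 (most significant) carry the clock ID used by the extended
# SSM protocol; bits 5-8 (least significant) carry the SSMB quality level.

DNU = 0xF  # 1111: do not use this clock source for synchronization

def pack_s1(clock_id: int, quality: int) -> int:
    """Build an S1 byte from a 4-bit clock ID and a 4-bit quality level."""
    assert 0 <= clock_id <= 0xF and 0 <= quality <= 0xF
    return (clock_id << 4) | quality

def unpack_s1(s1: int) -> tuple:
    """Return (clock_id, quality) extracted from an S1 byte."""
    return (s1 >> 4) & 0xF, s1 & 0xF

s1 = pack_s1(clock_id=0x3, quality=0x2)   # quality 0010: G.811 clock
cid, q = unpack_s1(s1)
print(hex(s1), cid, q)   # 0x32 3 2
```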
Table 20-1 Clock quality levels indicated by the SSMB

S1 Bits 5-8    Clock Quality Level
0000           Quality unknown
0001           Retained
0010           G.811 clock
0011           Retained
0100           SSU-A (Note 1)
0101           Retained
0110           Retained
0111           Retained
1000           SSU-B (Note 1)
1001           Retained
1010           Retained
1011           SDH equipment clock (SEC)
1100           Retained
1101           Retained
1110           Retained
1111           Do not use for synchronization
Note 1: The "G.812 Transit Exchange" and "G.812 Local Clock" terms are used in the previous
version of the ITU-T Recommendations. In the new version of ITU-T G.812, the clock definition
is changed to synchronization supply unit (SSU). The SSU is available in A and B types. The
SSU-A corresponds to the "G.812 Transit Exchange" and the SSU-B corresponds to the "G.812
Local Clock" that was previously used.
SSM Protocol
When the S1 byte is used for clock protection, the concept of a clock ID is introduced.
That is, clock protection is extended based on the original SSM protocol. In this manner, the
extended SSM protocol is developed.
The standard SSM protocol is a network synchronization management mechanism. It uses bits
5-8 of S1 byte to exchange the quality information of clock sources between nodes. This ensures
that the equipment automatically selects the clock source of the highest quality and priority, to
prevent an interlock of clocks. The standard SSM protocol improves the performance of a
synchronous network, and realizes synchronization of different network architectures in an easy
manner. The standard SSM protocol applies to the interconnection of equipment from different
vendors.
In the case of the extended SSM protocol, Huawei introduces the concept of clock ID based on
the standard SSM protocol. The extended SSM protocol uses bits 1-4 of S1 byte as the unique
ID of a clock source and transmits the clock ID with an SSM. After a node receives the S1 byte,
the node verifies the clock ID (bits 1-4) to determine whether the clock is locally sent. If the
clock is locally sent, the node considers the clock unavailable. This prevents a timing loop. The
extended SSM protocol is mainly used to realize the interconnection of HUAWEI transmission
equipment.
Clock ID
A clock ID uses bits 1-4 of the S1 byte, and the value range is 0x0 to 0xf. Basically, a clock ID
is used to distinguish the clock information of the local node from that of other nodes, to prevent
a node from tracing a clock signal that it sent locally and that is returned from the opposite
direction. Hence, a timing loop is prevented.
A value of 0 indicates that a clock ID is invalid. Hence, the clock ID takes the default value 0
when the clock ID is not required. When enabling the extended SSM protocol, an NE does not
select the clock source whose ID is 0 as the current clock source.
A clock ID is a tag set for a reference timing source. Clock sources at the same quality level
that carry different IDs represent different timing signals but are the same in priority level.
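The two exclusion rules for clock IDs (ID 0 is invalid, and a source carrying the local ID is a looped-back local clock) can be sketched as follows; the function name is hypothetical.

```python
# Sketch of the extended-SSM clock ID checks (hypothetical names).
# A source is rejected if its ID is 0 (invalid) or equals the local
# clock ID, which means the signal originated at this node.

def id_is_selectable(source_id: int, local_id: int) -> bool:
    if source_id == 0:          # ID 0: clock ID invalid / not configured
        return False
    if source_id == local_id:   # locally sent clock looped back
        return False
    return True

print(id_is_selectable(0, 5))   # False: invalid ID
print(id_is_selectable(5, 5))   # False: would form a timing loop
print(id_is_selectable(3, 5))   # True
```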
Set the clock ID according to the following principles:
l Allocate a clock ID to the internal clock source of each node that has an external BITS.
l Allocate a clock ID to the internal clock source of each node at which one chain or ring
network enters another ring network.
l Allocate a clock ID to the line clock source of each node at which one chain or ring network
enters another ring network, when the line clock source is available.
l When the clock source priority is set, the NE selects the clock source based on the priority
table if the SSM protocol is disabled. If the SSM protocol is enabled, however, the NE
selects the clock source with the best quality as the synchronization source, and sends the
synchronization status message byte (SSMB) to downstream NEs.
l If multiple clock sources at the same quality level exist, the NE selects the clock source at
the highest priority level as the synchronization source, and sends the SSMB to downstream
NEs.
l If NE B traces the clock source that is output from NE A, the clock of NE B is an unavailable
source for NE A.
l If the extended SSM protocol is enabled, the NE does not select the clock that has the same
ID as the local clock, or the clock whose ID is 0, as the synchronization source.
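The selection rules above can be summarized in a short sketch. The data structures and names are hypothetical; the sketch assumes, as described in this section, that smaller SSMB values mean better quality and smaller numbers mean higher priority.

```python
# Sketch of SSM-based clock source selection (hypothetical structures).
# Each candidate carries: name, quality (SSMB value: smaller = better),
# priority (smaller = higher), and clock_id (extended SSM only).

from dataclasses import dataclass

DNU = 0xF  # quality value 1111: source marked "do not use"

@dataclass
class Source:
    name: str
    quality: int
    priority: int
    clock_id: int = 0

def select_source(cands, local_id=None):
    """Pick the best-quality source; break ties by priority.

    If local_id is given (extended SSM), sources with ID 0 or the
    local ID are excluded to prevent timing loops."""
    usable = [c for c in cands if c.quality != DNU]
    if local_id is not None:
        usable = [c for c in usable if c.clock_id not in (0, local_id)]
    if not usable:
        return None  # fall back to holdover / free-run mode
    return min(usable, key=lambda c: (c.quality, c.priority))

cands = [
    Source("line-slot8",  quality=0x4, priority=2, clock_id=3),
    Source("line-slot11", quality=0x2, priority=1, clock_id=5),
    Source("internal",    quality=0xB, priority=3, clock_id=1),
]
best = select_source(cands, local_id=5)  # extended SSM: rejects ID 5
print(best.name)  # line-slot8
```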
Figure 20-1 Clock tracing relationship on the network (NE1 through NE6; slots 8 and 11
indicate the line boards from which clock signals are received)
Table 20-2 provides the clock source priorities for the NEs.
Table 20-2 Clock source priority table and traced clock source
As shown in Figure 20-2, when a fiber cut occurs between NE1 and NE6, NE6 fails to receive
clock signals from NE1 and then, according to the clock priority table, chooses to trace the clock
source on the board in slot 8. Therefore, interlocked clocks are generated between NE5 and NE6.
Figure 20-2 Clock tracing upon fiber cut when the SSM protocol is disabled
When the SSM protocol is disabled, protection switching provides limited functions and easily
causes interlocked clocks. In a system where the standard SSM protocol is enabled, however,
such problems do not occur.
l An NE selects the clock source whose quality is the best and priority is the highest as the
system clock source.
l Each NE transmits the 0xf message back toward the system clock source. The message
indicates that the clock source is unavailable.
As shown in Figure 20-1, the clock tracing relationship in a system where the standard SSM
protocol is enabled is different from that in a non-SSM system.
When a fiber cut occurs between NE1 and NE6, the clock tracing relationship on the entire
network changes. As shown in Figure 20-3, NE5 transmits the 0xf message to NE6 because
NE5 traces the clock source of NE6. When detecting that the clock source on the board in slot
11 is lost, NE6 sets its internal clock source as the clock source and transmits the clock signals
to NE5 through the S1 byte. After receiving the clock signals from NE6, NE5 starts to select a
new clock source because the clock signals from NE6 degrade. According to the clock priority
table, NE5 selects the clock source on the board in slot 8 as the system clock source and sends
NE6 the clock quality information through the S1 byte. After receiving the S1 byte, NE6 chooses
to trace the clock source on the board in slot 8. In this manner, the clock tracing relationship is
reestablished on the entire network.
Figure 20-3 Clock tracing upon fiber cut in a system where standard SSM protocol is enabled
If the clock tracing relationship is configured as shown in Figure 20-4, all the NEs involved
trace the clock sources in one direction only.
Figure 20-4 Clock tracing with all NEs tracing the clock sources in one direction only
When the BITS is faulty, a timing loop is generated, as shown in Figure 20-5. When the BITS
is faulty, NE1 chooses to trace the clock signals received by the board in slot 8, because NE1
and NE6 send each other the S1 byte (containing information about the BITS clock). Therefore,
a timing loop is generated on the entire network.
Figure 20-5 Clock tracing upon faulty BITS in a system where standard SSM protocol is enabled
According to the preceding analysis, the standard SSM protocol prevents the generation of
interlocked clocks but cannot prevent the generation of timing loops. The extended SSM
protocol, however, prevents the generation of timing loops.
l An NE selects the clock source whose quality is the best and priority is the highest as the
system clock source. In addition, the clock source must be provided by an NE other than
the NE itself.
l Each NE transmits the 0xf message back toward the system clock source. The message
indicates that the clock source is unavailable.
l The most significant four bits (bits 1-4) of the S1 byte are used to carry the clock source ID.
When the clock tracing relationship is shown in Figure 20-4 and the extended SSM protocol is
enabled, a timing loop is not generated even though the BITS is faulty. As shown in Figure
20-6, NE1 broadcasts its internal clock source to the neighbor NEs after determining not to trace
the clock source of the board in slot 8 and determining that the quality of the clock from NE2 is
not good. Thus, all the NEs on the network trace the internal clock source of NE1.
Figure 20-6 Clock tracing in an extended SSM system in the case of faulty BITS
The extended SSM protocol is a proprietary protocol of Huawei. It effectively prevents the
generation of timing loops by using clock ID.
(Figure: a retiming buffer desynchronizes the tributary signal from f1 and outputs it at the local
timing reference f0)
Figure 20-8 shows how PDH signals are transmitted over an SDH transmission network
without retiming. The reference frequency f1 of equipment 1 at the transmit end locks on
f0 to prevent periodical slips. When PDH signals are adapted into the SDH transmission
network, pointer justifications cause phase jumps of output PDH signals, and frequency f1
of the output PDH signals becomes asynchronous with f0. As a result, the frequency of
output signals cannot be used as a timing reference for equipment 2 (a digital stored program
control switch, for example).
Figure 20-8 PDH signal transmission without retiming (the frequency of the tributary signal
cannot be used as the synchronous clock of equipment 2)
f1: frequency of the PDH signal; f0: frequency of the SDH timing reference; S: synchronization;
U: desynchronization; R: retiming; PRC: SDH reference clock
Figure 20-9 shows how PDH signals are transmitted over an SDH transmission network
with retiming. The reference frequency f1 of equipment 1 at the transmit end locks on f0
to prevent periodical slips. At the network output end, a local timing reference f0 is provided
to absorb wanders and jitters generated in the pointer justification process. Thus, the
frequency of output tributary signals f1 remains synchronous with f0 and equipment 2 can
extract the output tributary signals for synchronization purpose.
Figure 20-9 PDH signal transmission with retiming (a local timing reference f0 retimes the
output tributary signals; SEC: SDH equipment clock)
ITU-T G.812: Timing requirements of slave clocks suitable for use as node clocks in
synchronization networks
20.1.4 Availability
The SDH clock synchronization function requires the support of the applicable equipment,
boards, and software.
Version Support
(Table: applicable equipment and versions, together with the supporting T2000 and U2000
versions)
Hardware Support

Applicable Board                              Applicable Equipment
SDH boards                                    OptiX OSN 1500/2500/3500/3500 II/7500/7500 II
PDH boards                                    OptiX OSN 1500/2500/3500/3500 II/7500/7500 II
Microwave intermediate frequency (IF) boards  OptiX OSN 1500/2500/3500/3500 II/7500

Applicable boards include N1CQ1, N1MD75, N1MD12, R1ML1, TNN1AFO1, TNN1CO1,
TNN1D75E, and TNN1D12E.
20.1.5 Principles
Clock synchronization is implemented on an SDH network through clock source selection,
clock signal extraction, and clock signal transmission.
As shown in Figure 20-10, clock synchronization of an SDH network is implemented as follows:
l A master clock is specified, and the clocks of the other NEs on the network use this clock
as a reference. As shown in Figure 20-10, the clock of NE1 is specified as the master clock.
l The clock signal of the BITS is sent to NE1 through the external clock interface.
l NE1 places this clock signal into the STM-N signal and then transmits the STM-N signal
to its neighbors, NE2 and NE6.
l In the case of the other NEs on the network, each NE collects the clock signals that are
received by all the local boards, and compares the priority and quality of the clock signals
to select an optimal clock source.
l After the NE extracts the clock, the PLL traces the optimal clock source and locks on its
phase. Then, the system clock is synchronized with the optimal clock source.
l The NE sends the system clock to the downstream NE through the line board. In this
manner, the clock signal is transmitted downstream.
Figure 20-10 Clock synchronization on an SDH network (clock signals flow from NE1 to the
other NEs)
General Guidelines
When you configure clocks, follow the general guidelines:
l At the backbone layer and convergence layer, adopt clock protection and configure active
and standby reference clock sources to perform switching if necessary. At the access layer,
generally, configure one reference clock source at the central station and enable the other
stations to trace the clock of the central station.
l When a building integrated timing supply (BITS) system or another external clock device
with high precision is available, an NE can trace the external clock source. If no such device
is available, an NE can trace the line clock source. The internal clock can work as the clock
source with the lowest priority. If a line clock source needs to be traced, ensure that the
clock tracing route is the shortest. The details are as follows:
– In the case of a ring network consisting of fewer than six NEs, these NEs can trace the
reference clock source in one direction.
– In the case of a ring network consisting of six or more NEs, ensure that these NEs trace
the reference clock source over the shortest path. That is, in a network consisting of N
NEs, half of the NEs trace the reference clock source in one direction and the other half
trace it in the other direction. (If N is an odd number, the intermediate NE can trace the
reference clock source in either direction.)
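The shortest-path rule can be illustrated with a small sketch; the helper name and NE indexing are hypothetical, and the tie-breaking choice for the intermediate NE of an odd-sized ring is an assumption.

```python
# Sketch of the shortest-path rule for an n-NE ring (hypothetical helper).
# NE 0 holds the reference clock; each other NE traces in whichever
# ring direction gives the shorter hop count back to NE 0.

def tracing_direction(ne: int, n: int) -> str:
    """Return 'cw' or 'ccw' for NE index ne (1..n-1) on an n-NE ring."""
    cw_hops = ne         # hops back to NE 0 in the clockwise direction
    ccw_hops = n - ne    # hops back to NE 0 the other way around
    return "cw" if cw_hops <= ccw_hops else "ccw"

# 6-NE ring: half of the NEs trace one way, the other half the other way.
print([tracing_direction(ne, 6) for ne in range(1, 6)])
# ['cw', 'cw', 'cw', 'ccw', 'ccw']
```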
l Properly plan the clock synchronization network to prevent interlocked clocks and clock
loops.
l Use the clock extracted from STM-N signals as the inter-office clock. Do not use the
tributary signal for timing.
l If multiple NEs form a long chain, clock compensation is required. Otherwise, wanders are
generated in the clock signal after it is transmitted over several stations. ITU-T G.781
specifies that clock compensation is required if 20 or more NEs form a long chain.
Considering the transmission distance of fibers, clock compensation is performed in actual
applications if more than 10 NEs form a long chain.
l If the NE needs to select a clock source based only on the preset clock priorities, without
considering the quality of clock sources, the SSM protocol can be disabled. In this case,
the clock network can be unidirectional but cannot adopt a ring topology.
l If the NE needs to automatically select the clock source that is of the highest quality and
with the highest priority, the standard SSM protocol needs to be enabled. If the clock
network consists of Huawei equipment and third-party equipment, only the standard SSM
protocol can be enabled; the extended SSM protocol cannot. After the standard SSM
protocol is enabled, the clock network can be bidirectional but cannot adopt a ring topology.
l If the NE needs to automatically select the clock source that is of the highest quality and
with the highest priority, and the clock network consists of only Huawei equipment, the
extended SSM protocol can be enabled. With the concept of the clock ID introduced, the
extended SSM protocol effectively prevents timing loops on the clock network. After the
extended SSM protocol is enabled, the clock network can be bidirectional and adopt a ring
topology; however, the clock network cannot intersect or be tangent with other networks.
l All the NEs that trace the same clock source should be divided into the same clock subnet.
The clock tracing chain cannot be excessively long, so it is recommended that a subnet
contain no more than 20 NEs. Otherwise, clock precision may decline.
(Figure: clock configuration flow, in which you set switching conditions for clock sources and
then switch a clock source)
NOTE
l The horizontal direction of the figure shows the three stages when you use the U2000 to configure
clocks.
l The vertical direction of the figure shows the relations between tasks at each stage.
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Clock > Physical Layer
Clock > Clock Synchronization Status from the Function Tree.
NOTE
To view the clock synchronization status of NEs in batches, choose Configuration > Batch Clock
Operation from the Main Menu. Then, click the Clock Synchronization Status tab. In the Object Tree,
select the target NEs and click
Step 2 Click Query. You can see the clock synchronization status information uploaded from the NE.
----End
Prerequisites
You must be an NM user with NE operator authority or higher.
Background Information
To implement clock protection, you must configure at least two traceable clock sources for the
equipment. Usually, the tributary clock is not used as the clock source for the equipment.
After you set the clock sources for all the NEs, query the networkwide clock tracing status again.
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Clock > Physical Layer
Clock > Clock Source Priority from the Function Tree.
NOTE
Internal clock sources have the lowest priority because of their low precision.
Step 6 Click Apply. In the Operation Result dialog box, click Close.
NOTE
If the clock tracing relationship changes because of change in the clock source, the Prompt dialog box is
displayed, asking you whether to refresh the clock tracing relationship. Click OK. If you select Disable
Prompting Next Time, no prompt is displayed even when the clock tracing relationship changes.
Step 7 Optional: If you select the line clock as the phase-locked source, you need to specify the priority
table for the phase-locked sources. Click the Priority Table for Phase-Locked Sources of 1st
External Clock Output tab, and then click Create. In the dialog box that is displayed, select
the phase-locked source, and click OK.
Step 8 Click Apply. In the dialog box that is displayed, click Close.
----End
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Clock > Physical Layer
Clock > Clock Subnet Configuration from the Function Tree.
Step 2 Click the Clock Subnet tab. Click Query to query the existing parameter settings.
Step 3 Select Start Standard SSM Protocol or Start Extended SSM Protocol.
NOTE
The same SSM protection protocol must be used for the same clock protection subnet.
Step 4 Set the subnet number of the clock subnet that the NE belongs to.
NOTE
Allocate the same subnet number to NEs tracing the same clock source.
Step 5 Optional: If the extended SSM protocol is enabled, set the clock ID of the clock source.
Step 6 Click Apply. In the Operation Result dialog box, click Close.
Step 7 Optional: If the clock ID is specified for the line clock of an NE, click the Clock ID Status tab,
and set the Enabled Status to Enabled. Click Apply. In the Operation Result dialog box, click
Close.
----End
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Clock > Physical Layer
Clock > Clock Source Switching from the Function Tree. Click the Clock Source Switching
Condition tab.
Step 2 Click Query to query the existing parameter settings.
Step 3 Double-click the parameter column and set the alarms and performance events that are to be
used as the clock source switching conditions to Yes.
Step 4 Click Apply. In the Operation Result dialog box, click Close.
----End
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Clock > Physical Layer
Clock > Clock Source Switching from the Function Tree. Click the Clock Source Reversion
Parameter tab.
NOTE
To set the clock source reversion for multiple NEs in batches, choose Configuration > Batch Clock
Operation from the Main Menu. Click the Clock Source Reversion Parameter tab. In the Object Tree,
select the desired NEs and click
Step 2 Double-click and set the reversion mode and the WTR time.
NOTE
Do not set Clock Source WTR Time ( min) to 0. Otherwise, switching occurs repeatedly when
the clock is unstable.
Step 3 Click Apply. In the Operation Result dialog box, click Close.
----End
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Clock > Physical Layer
Clock > Phase-Locked Source Output by External Clock from the Function Tree.
Step 2 Click Query to query the existing parameter settings.
Step 3 Set the parameters manually, such as External Clock Output Mode When 2M Output
Synchronous Source Is Invalid and Clock Source Threshold.
Step 4 Click Apply. In the Operation Result dialog box, click Close.
Step 5 Optional: In the case of the NG-SDH equipment, you can also set the external clock attributes
of the 2M phase-locked source. Click the 2M Phase-Locked Source External Clock
Attributes tab and set the parameters manually such as External Clock Output Switch,
External Clock Output Mode, and External Clock Output Timeslot. Click Apply. In the
Operation Result dialog box, click Close.
----End
Example
As shown in Figure 20-14, N NEs comprise a long transmission chain and the external BITS1
equipment is used as the clock synchronization source. After the transmission over several NEs,
the BITS1 clock signals are degraded to a certain extent. In this case, you can output the BITS1
signals from NEm, the NE that requires clock quality compensation, to the local BITS2
equipment, which compensates the signals. After the compensation, the clock signals are
transmitted from the BITS2 equipment back to NEm, to function as the clock synchronization
source of the downstream equipment. The 2M phase-locked source of NEm should be the input
clock source of the west line board, and the clock synchronization source should be the BITS2
PRC input externally.
Figure 20-14 Typical application
(Figure: BITS1 connects to NE1; BITS2 connects to NEm on the chain NE1 ... NEm ... NEn;
west and east denote the line directions)
To make sure that the BITS2 equipment receives clock signals from NEm correctly, you need
to set the output external clock of NEm. Perform the settings based on the parameters of the
BITS2 equipment and make sure that the settings on NEm are consistent with the settings on
the BITS2 equipment.
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Clock > Physical Layer
Clock > Clock Subnet Configuration from the Function Tree. Click the Clock Quality tab.
Step 2 Click Query to query the existing parameter settings.
Step 3 Click the Clock Source Quality tab and set Configuration Quality to a desired level.
NOTE
Step 4 Click Apply. In the Operation Result dialog box, click Close.
Step 5 If the quality level of a clock source is zero, you can specify the level manually. Click the Manual
Setting of 0 Quality Level tab and set Manual Setting of 0 Quality Level to a required level.
NOTE
To set the clock source quality for multiple NEs in batches, choose Configuration > Batch Clock
Operation from the Main Menu. Click the Manual Setting of 0 Quality Level tab. In the Object Tree,
select the target NEs and click
Step 6 Click Apply. In the Operation Result dialog box, click Close.
----End
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 In the NE Explorer, select the NE and choose Configuration > Clock > Physical Layer
Clock > Clock Subnet Configuration from the Function Tree. Click the SSM Output
Control tab.
Step 2 Set the Control Status of the clock source.
Step 3 Click Apply. In the Operation Result dialog box, click Close.
----End
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Clock > Physical Layer
Clock > Clock Source Switching from the Function Tree.
Step 2 Click the Clock Source Switching tab, and click Query to query the current switching status
of the current clock source.
Prerequisites
l
Precautions
CAUTION
Performing clock source switching may cause service interruption.
Procedure
Step 1 In the NE Explorer, select an NE and choose Configuration > Clock > Physical Layer
Clock > Clock Source Switching from the Function Tree. Click the Clock Source
Switching tab.
Step 2 Click Query to query the current switching status of a clock source.
Step 3 Optional: If the Lock Status is Lock, right-click and choose Release Lockout.
Step 4 Right-click the clock source that you want to switch and select a switching operation.
NOTE
Before switching the clock source, make sure that the target clock source is created in the priority
table, is not locked, and is of good quality.
Step 5 Optional: To restore the automatic clock source selection mode, right-click the switched clock
source and choose Clear Switching.
----End
In the network, no external clock equipment is available and all the NEs are synchronized
with the internal clock source of a certain NE. As shown in Figure 20-15, NE2 to NE6 are
synchronized with the internal clock source of NE1. The clock signal flow in the following
figure shows the tracing situation of the clock signal when the network is normal.
In the network, the external clock equipment is available and all the NEs are synchronized
with this external clock source. As shown in Figure 20-16, NE1 to NE6 are synchronized
with the external clock source BITS. The clock signal flow in the following figure shows
the tracing situation of the clock signal when the network is normal.
Figure 20-15 All the NEs synchronized with the internal clock source of NE1
Figure 20-16 All the NEs synchronized with the external clock source BITS
Procedure
Step 1 Configure the clock source priority for each NE. For details, see Configuring NE Clock Sources.
Set the following parameter for NE1:
l Clock Source: Internal Clock Source
Set the following parameter for NE2 to NE4:
l Clock Source: line source provided by the optical interface of the upstream NE that is
connected to the line board in slot 8, Internal Clock Source
Set the following parameter for NE5 and NE6:
l Clock Source: line source provided by the optical interface of the upstream NE that is
connected to the line board in slot 11, Internal Clock Source
----End
Procedure
Step 1 Configure the clock source priority for each NE. For details, see Configuring NE Clock Sources.
On NE1, set the following parameters:
l Clock Source: external clock source 1 (BITS), internal clock source
l External Clock Source Mode
On NE2 to NE4, set the following parameters:
l Clock Source: line source provided by the optical interface of the upstream NE that is
connected to the line board in slot 8, internal clock source
On NE5 and NE6, set the following parameters:
l Clock Source: line source provided by the optical interface of the upstream NE that is
connected to the line board in slot 11, internal clock source
----End
When the NE cannot trace the active clock source, the NE switches to the standby clock source. See
Figure 20-17. NE1 through NE6 are all synchronized with the external clock source BITS1.
The clock signal flow in the following figure shows the tracing situation of the clock signal when
the network is normal.
Figure 20-17 Starting the standard SSM protocol
Service Planning
In this scenario, all NEs are divided into clock subnet 1. The mode of all external clock equipment
is 2 Mbit/s, and the timeslot is SA4.
The two external clock devices provide clock signals of the same level.
Procedure
Step 1 Configure the clock source priority for each NE. For details, see Configuring NE Clock Sources.
On NE1, set the following parameters:
l Clock Source: external clock source 1 that connects to the BITS1 equipment, optical interface
that connects to the upstream NE through the line board in slot 11, internal clock source
l External Clock Source Mode: 2 Mbit/s
l Synchronous Status Byte: SA4
On NE2 and NE3, set the following parameters:
l Clock Source: optical interface that connects to the upstream NE through the line board in
slot 8, optical interface that connects to the downstream NE through the line board in slot
11, internal clock source
On NE4, set the following parameters:
l Clock Source: optical interface that connects to the upstream NE through the line board in
slot 8, optical interface that connects to the downstream NE through the line board in slot
11, external clock source 1 that connects to the BITS2 equipment, internal clock source
l External Clock Source Mode: 2 Mbit/s
l Synchronous Status Byte: SA4
On NE5 and NE6, set the following parameters:
l Clock Source: optical interface that connects to the upstream NE through the line board in
slot 11, optical interface that connects to the downstream NE through the line board in slot
8, internal clock source
Step 2 Enable the standard SSM protocol for each NE. For details, see Configuring the Clock Source
Protection.
On NE1 to NE6, set the following parameters:
l Protection Status: Start Standard SSM Protocol
Step 3 Configure the affiliated clock subnet of each NE. For details, see Configuring the Clock Source
Protection.
On NE1 to NE6, set the following parameters:
l Affiliated Subnet: 1
----End
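Conceptually, the standard SSM protocol selects among the clock sources configured above by signal quality first and configured priority second, and never selects a source marked Do Not Use (DNU). The following sketch illustrates that selection rule; the quality levels follow ITU-T G.781, and the data structures and helper names are illustrative, not the equipment's internal logic:

```python
# Illustrative SSM quality levels (smaller value = better quality, per G.781).
QUALITY = {"PRC": 2, "SSU-T": 4, "SSU-L": 8, "SEC": 11, "DNU": 15}

def select_clock_source(sources):
    """Pick the clock source to trace: the best SSM quality wins; among
    sources of equal quality, the higher configured priority (smaller
    number) wins. Sources marked DNU are never selected."""
    usable = [s for s in sources if s["quality"] != "DNU"]
    if not usable:
        return None  # the NE falls back to its internal clock source
    return min(usable, key=lambda s: (QUALITY[s["quality"]], s["priority"]))

# Priority list of NE1 in the example, with hypothetical received qualities.
sources = [
    {"name": "external BITS1", "quality": "PRC", "priority": 1},
    {"name": "line slot 11", "quality": "SSU-T", "priority": 2},
    {"name": "line slot 8", "quality": "DNU", "priority": 3},
]
best = select_clock_source(sources)  # BITS1 wins on quality
```

If BITS1 degrades to a quality worse than the line source, the same rule makes the NE switch to the slot 11 line source, which is the protection behavior the procedure above configures.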
With the extended SSM protocol, each node uses the clock ID to check whether a received clock signal
is transmitted by itself. If the clock signal is transmitted by itself, the node considers the clock
source unavailable. In this way, the timing loop is prevented.
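The loop-prevention check can be sketched as follows. This is a simplified model in which each NE compares the clock ID carried in a received signal with the ID of the clock it transmits itself; the field names are illustrative, not the equipment's internal logic:

```python
def usable_sources(own_clock_id, received):
    """Filter out clock signals that this node originally injected.
    A signal whose clock ID equals the ID this node transmits has looped
    back, so treating it as unavailable breaks the timing loop."""
    return [s for s in received if s["clock_id"] != own_clock_id]

# Hypothetical signals arriving at an NE whose own clock ID is 1.
received = [
    {"port": "slot 8", "clock_id": 1},   # looped back: injected by this NE
    {"port": "slot 11", "clock_id": 9},  # genuine upstream clock
]
ok = usable_sources(1, received)  # only the slot 11 signal remains usable
```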
In the network, two sets of external clock equipment are available and all the NEs are
synchronized with the primary external clock source. When the primary external clock
source cannot be traced, a clock switching is performed so that the NEs trace the secondary
clock source. As shown in Figure 20-18, NE1 to NE6 are synchronized with the primary
external clock source BITS1. The clock signal flow in the following figure shows the tracing
situation of the clock signal when the network is normal.
In the network, two sets of external clock equipment are available and all the NEs are
synchronized with the primary external clock source. When the primary external clock
source cannot be traced, a clock switching is performed so that the NEs trace the secondary
clock source. In addition, this link is long. To make sure that the downstream NEs can trace
clock signals with high quality after a long distance of transmission, clock compensation
is required on the NEs of the link. As shown in Figure 20-19, NE1 to NE18 are synchronized
with the primary external clock source BITS1. BITS3 is used for clock compensation.
BITS2 works as the protection clock source. The clock signal flow in the following figure
shows the tracing situation of the clock signal when the network is normal.
Issue 03 (2012-11-30)
(Figure 20-18, ring network: BITS1 connects to NE1; NE1 through NE6 form a ring connected through the line boards in slots 8 and 11; BITS2 connects to NE4. Figure 20-19, long chain: BITS1 connects to NE1; NE1 through NE18 form a chain connected through the line boards in slots 8 and 11; BITS3 connects to NE9 for clock compensation; BITS2 connects to NE18 as the protection clock source.)
Service Planning
In the case of the ring network:
l All the NEs are assigned to clock subnet 1. The mode of all external clock equipment is 2
Mbit/s, and the timeslot is SA4.
l The two external clock devices provide clock signals of the same level.
In the case of the long chain, all the NEs are assigned to clock subnet 1. The mode of all external
clock equipment is 2 Mbit/s, and the timeslot is SA4.
Procedure
Step 1 Configure the clock source priority for each NE. For details, see Configuring NE Clock Sources.
On NE1, set the following parameters:
l Clock Source: external clock source 1 that connects to the BITS1 equipment, optical interface
that connects to the upstream NE through the line board in slot 8, optical interface that
connects to the downstream NE through the line board in slot 11, internal clock source
l External Clock Source Mode: 2 Mbit/s
l Synchronous Status Byte: SA4
On NE2 and NE3, set the following parameters:
l Clock Source: optical interface that connects to the upstream NE through the line board in
slot 8, optical interface that connects to the downstream NE through the line board in slot
11, internal clock source
On NE4, set the following parameters:
l Clock Source: external clock source 1 that connects to the BITS2 equipment, optical interface
that connects to the upstream NE through the line board in slot 8, optical interface that connects
to the downstream NE through the line board in slot 11, internal clock source
l External Clock Source Mode: 2 Mbit/s
l Synchronous Status Byte: SA4
On NE5 and NE6, set the following parameters:
l Clock Source: optical interface that connects to the upstream NE through the line board in
slot 11, optical interface that connects to the downstream NE through the line board in slot
8, internal clock source
Step 2 Configure the affiliated clock subnet of each NE. For details, see Configuring the Clock Source
Protection.
On NE1 to NE6, set the following parameters:
l Affiliated Subnet: 1
Step 3 Enable the extended SSM protocol for each NE. For details, see Configuring the Clock Source
Protection.
On NE1 to NE6, set the following parameters:
The clock source ID can be set to any value within the permitted range.
----End
Procedure
Step 1 Configure the clock source priority for each NE. For details, see Configuring NE Clock Sources.
Set the following parameters for NE1:
l Clock Source: external clock source 1 that connects to the BITS1 equipment, optical interface
that connects to the upstream NE through the line board in slot 8, internal clock source
l External Clock Source Mode: 2 Mbit/s
l Synchronous Status Byte: SA4
Set the following parameter for NE2 to NE8 and NE10 to NE17:
l Clock Source: optical interface that connects to the upstream NE through the line board in
slot 11, optical interface that connects to the downstream NE through the line board in slot
8, internal clock source
Set the following parameters for NE9:
l Clock Source: external clock source 1 that connects to the BITS3 equipment, optical interface
that connects to the upstream NE through the line board in slot 11, optical interface that
connects to the downstream NE through the line board in slot 8, internal clock source
l External Clock Source Mode: 2 Mbit/s
l Synchronous Status Byte: SA4
Set the following parameters for NE18:
l Clock Source: external clock source 1 that connects to the BITS2 equipment, optical interface
that connects to the upstream NE through the line board in slot 11, internal clock source
l External Clock Source Mode: 2 Mbit/s
l Synchronous Status Byte: SA4
Step 2 Configure the affiliated clock subnet of each NE. For details, see Configuring the Clock Source
Protection.
The clock source ID can be set to any value within the permitted range.
Step 5 Configure the priority table for the 2M phase-locked clock sources of NE9.
1. In the NE Explorer, select NE9 and choose Configuration > Clock > Physical Layer
Clock > Clock Source Priority from the Function Tree. In the pane on the right side, click
the Priority Table for Phase-Locked Sources of 1st External Clock Output tab.
2. Click Create. In the Add Clock Source dialog box that is displayed, select the clock
sources for the boards in slot 1 and slot 3, and then click OK.
3. Click Apply. In the Operation Result dialog box that is displayed, click Close.
Step 6 Configure the tracing mode of the 2M phase-locked loop for NE9. For details, see Configuring
the Phase-Locked Source for External Clock Output.
Set the following parameters:
l External Clock Output Mode when 2M Output Synchronization Source Is Invalid: Shut off
l Input and Output Impedance of External Clock Source 1: Consistent with the same parameter
on the BITS3 equipment.
l Input and Output Mode of External Clock Source 1: Consistent with the same parameter on
the BITS3 equipment.
----End
20.1.10 Troubleshooting
Following a certain troubleshooting flow can facilitate fault locating and rectification.
Symptom
During normal network operation, the clock synchronization path is interrupted and the clock
protection switching fails. As a result, a large number of pointer justifications occur on the related
NEs.
Impact on System
The failure in the clock protection switching may cause pointer justifications and service
interruptions.
Possible Causes
l Cause 3: The configuration of the clock protection switching protocol of the entire network
is incorrect.
l Cause 6: The extended synchronization status message (SSM) protocol is disabled or the
clock ID of the clock source is absent.
U2000
Procedure
Step 1 Cause 1: The fiber connections on the board are incorrect. As a result, the protection
switching fails.
1. See the protection principles to check whether the fiber connections at the faulty point are
correct.
If...
Then...
Step 2 Cause 2: The configuration of the clock tracing mode of the NE is incorrect. As a result,
the protection switching fails.
1. See the protection principles to check whether the clock tracing mode of the NE is correct.
If...
Then...
Step 3 Cause 3: The configuration of the clock protection switching protocol of the entire network
is incorrect. As a result, the protection switching fails.
1. Check whether the related NEs are added to the clock protection subnets.
If...
Then...
2. Check whether the clock protection switching protocol of related NEs is enabled.
If...
Then...
Step 4 Cause 4: The configuration of the external clock source of the NE is incorrect. As a result,
the protection switching fails.
1.
If...
Then...
2. Check whether the external clock source carries the SSMB information.
If...
Then...
3. Check whether the external clock source is configured with the S1 byte correctly.
If...
Then...
Step 5 Cause 5: The hardware is faulty. As a result, the protection switching fails.
1.
2.
3. Check whether the services are restored. If the services are not restored, check whether the
fault is due to other causes.
Step 6 Cause 6: The extended SSM protocol is disabled or the clock ID of the clock source is
absent.
1.
Then...
----End
Related Information
In the case of clock protection, the direction of each NE clock source must match the fiber
connections. That is, the eastbound/westbound fibers must be connected correctly. When the
clock protection fails, check whether the fiber connections of each NE on the entire network
match the settings of the clock source.
CLK_NO_TRACE_MODE
EXT_SYNC_LOS
FSELECT_STG
LTI
R_LOC
S1_SYN_CHANGE
SYNC_F_M_SWITCH
SYNC_LOCKOFF
OOL
SYN_BAD
SYNC_C_LOS
AUPJCHIGH
AUPJCLOW
AUPJCNEW
Field                                                 Value Range
NE Name                                               -
NE Synchronous Clock Output<2M Phase-Locked Source>   -
Output Impedance of External Clock Source 1           2 Mbit/s, 2 MHz
Output Impedance of External Clock Source 2           2 Mbit/s, 2 MHz
Table 20-6 lists the parameters for configuring the 2M phase-locked source attributes of an
external clock.
Table 20-6 Parameters for configuring the 2M phase-locked source attributes of an external
clock
Field                                    Value Range
2M Phase-Locked Source Number            -
-                                        Open, Close
-                                        2 MHz, 2 Mbit/s
2M Phase-Locked Source Fail Condition    -
2M Phase-Locked Source Fail Action       Unavailable, Send AIS, Shut Down Output
2M Output S1 Byte                        -
NE Name                                  -
-                                        Auto-Revertive, Non-Revertive
-                                        0 to 12 (default value: 5)
-                                        Yes, No (default value: No)
-                                        Yes, No (default value: No)
-                                        Yes
Field                            Value Range      Description
CV Threshold-Crossing Generated  Yes, No          -
CV Threshold                     Yes, No          -
Effective Status                 Valid, Invalid   -
Lock Status                      Lock, Unlock     Specifies whether a switching operation is allowed. Click A.27.15 Lock Status (Clock) for more information.
Switching Source                 -                -
Switching Status                 -                -
Table 20-8 Parameters for managing the quality and status of a clock
Field                                        Value Range
NE Name                                      -
S1 Byte Synchronization Quality Information  For example, NA
Synchronous Source                           Unknown Synchronization Quality, G.811 Clock Signal, G.812 Transit Clock Signal, G.812 Local Clock Signal, G.813 SDH Equipment Timing Source (SETS) Signal, Do Not Use For Synchronization, Automatic Extraction, NA
Field                                        Value Range
Retiming Mode                                -
Clock synchronization refers to frequency synchronization: the frequencies of the signals trace
the reference frequency, but the start times of the signals can differ.
Clock Models
The equipment supports two models of IEEE 1588v2 clock:
l Ordinary clock (OC): The OC equipment provides only one port that supports the extraction
of the IEEE 1588v2 packets. The OC equipment can work as the clock equipment that is
synchronized with the upstream clock; in this case, it obtains the clock packets through the
port that supports the extraction of the IEEE 1588v2 packets and recovers the clock. The
OC equipment can also work as the master clock equipment; in this case, it accesses the
external time through the external clock interface and transmits the external clock to the
downstream node through the port that supports the extraction of the IEEE 1588v2 packets.
l Boundary clock (BC): The BC equipment provides several ports that support the extraction
of the IEEE 1588v2 packets. The BC equipment can work as the master clock equipment
or the slave clock equipment. The BC equipment transmits the clock packets to the
downstream node through several ports, whereas the OC equipment transmits the clock
packets to the downstream node only through one port.
Clock Architecture
Figure 20-20 shows the architecture of the IEEE 1588v2 clock.
(The figure shows BC and OC nodes connected in a master/slave hierarchy: each link has a master port on the upstream clock equipment and a slave port on the downstream clock equipment.)
Ethernet encapsulation:
DMAC (six bytes) | SMAC (six bytes) | Ethernet type (two bytes) | 1588 payload (44 to 64 bytes) | FCS (four bytes)
The DMAC, SMAC, and Ethernet type fields form the Ethernet header; the Ethernet type identifies the protocol type of the payload.
Ethernet encapsulation with VLAN:
DMAC (six bytes) | SMAC (six bytes) | VLAN (four bytes) | Ethernet type (two bytes) | 1588 payload (44 to 64 bytes) | FCS (four bytes)
UDP/IP encapsulation:
DMAC (6 bytes) | SMAC (6 bytes) | Ethernet type (2 bytes) | IP header (12 bytes + SA_IP 4 bytes + DA_IP 4 bytes) | UDP header (SPN 2 bytes + DPN 2 bytes + UDP_Len 2 bytes + UDP_checksum 2 bytes) | 1588 payload (44 to 64 bytes) | FCS (4 bytes)
The DMAC, SMAC, and Ethernet type fields form the Ethernet header; SPN and DPN are the source and destination port numbers of the UDP header.
20.2.3 Specifications
This section describes the capacity of the OptiX OSN equipment for supporting IEEE 1588v2
time and clock synchronization.
Table 20-10 provides specifications associated with IEEE 1588v2 time and clock
synchronization.
Table 20-10 Specifications associated with IEEE 1588v2 time and clock synchronization
Item                 Specifications
Support capability   Number of ports: 16
-                    30 ns
-                    1 us
IEEE 1588v2: IEEE Standard for a Precision Clock Synchronization Protocol for
Networked Measurement and Control Systems
20.2.5 Availability
The IEEE 1588v2 clock synchronization function requires the support of the applicable
equipment, boards, and software.
Version Support
(Version support table: the applicable NM is the T2000 or U2000.)
Hardware Support
The following boards support IEEE 1588v2:
l Packet processing boards: N1PEG8, N2PEX1, N1PEX2, R1PEFS8, Q1PEGS2, R1PEGS1,
R1PEF4F, TNN1EX2, TNN1EG8
l Packet interface boards: N1PEFF8, N1PETF8, TNN1ETMC
l Auxiliary boards: N1AUX, T1EOW, R1EOW
l Cross-connect and system control boards: N1PSXCS, N2PSXCSA, T1PSXCSA, R1PCXLN,
N4GSCC, N6GSCC, TNN1PSXCS, N3PSXCSA
NOTE
The external time input and output interface of an OptiX OSN 7500 II is on the TNN1PSXCS board, and that
of an OptiX OSN 1500, OptiX OSN 3500, or OptiX OSN 7500 is on the auxiliary board.
Applicable objects (with remarks):
l IEEE 1588v2 time and clock synchronization
l Frequency synchronization
l Time synchronization
l Clock working modes
l IEEE 1588v2 time and clock synchronization and SDH clock synchronization (TNN1EG8
and N1PEG8)
l IEEE 1588v2 time and clock synchronization

Maintenance Principles

Applicable object (with remarks):
l IEEE 1588v2 time and clock synchronization
BMC Algorithm
The best master clock (BMC) algorithm consists of the following parts:
l Data set comparison algorithm: determines the optimal clock source according to the
quality of the clock of the Announce packets that are received at each port.
l Topology comparison algorithm: During the data set comparison, if the GM IDs of the
Announce packets received from two clock sources are the same, the IEEE 1588v2
protocol considers that the Announce packets received from the two clock sources are from
the same reference clock source. In this case, the optimal clock source is specified by
comparing the topological routes, so the topology comparison algorithm is used. The
topology comparison algorithm is related to the networking topology. It determines the
optimal clock source according to the number of hops across which the Announce packets
travel: the fewer the hops the Announce packet travels, the higher the quality of the clock
source.
l State decision algorithm: determines the port state according to the data set comparison
algorithm result and topology comparison algorithm result.
Port State
The PTP defines nine port states, namely, PTP_INITIALIZING, PTP_FAULTY,
PTP_DISABLED, PTP_LISTENING, PTP_PRE_MASTER, PTP_MASTER, PTP_PASSIVE,
PTP_SLAVE, and PTP_UNCALIBRATED. The PTP_INITIALIZING, PTP_FAULTY, and
l PTP_MASTER: The clock port that is in the PTP_MASTER state provides the clock source
to the downstream equipment.
l PTP_PASSIVE: The clock port that is in the PTP_PASSIVE state is not the master clock
port and is not synchronized with the master clock.
l PTP_SLAVE: The clock port that is in the PTP_SLAVE state works as a downstream port
and receives the clock information from the upstream port. The port that is in the
PTP_SLAVE state is synchronized with the clock port that is in the PTP_MASTER state
on the same time path.
1. Receives and authenticates the Announce packets from other clock ports.
2.
3. Updates the port data sets according to the decision point that is used in the state decision
algorithm and is in the recommended state.
4. Determines the actual master/slave state of the port by using the state machine associated
with the port and establishes the master/slave relationships, according to the recommended
state and state decision event.
This topic considers the following ring network that comprises three NEs as an example to
describe the process of establishing the master/slave relationships.
As shown in Figure 20-24, node A, node B, and node C construct a ring network. Node A is
connected to node B through port A1 and is connected to node C through port A2. Node B is
connected to node A through port B2 and is connected to node C through port B1. Node C is
connected to node A through port C1 and is connected to node B through port C2. Node A is
assumed as the reference clock source.
Figure 20-24 Working principle diagram of the BMC algorithm
(Ring of three nodes: on node A, ports A1 and A2 are master (M) ports; port B2 on node B and port C1 on node C are slave (S) ports; ports B1 and C2 connect nodes B and C.)
1. Node A is assumed to be the reference clock source. Hence, port A1 and port A2 change to
the master state first.
2. After port B1 and port B2 receive the IEEE 1588v2 packets, the equipment compares the
GM IDs contained in the IEEE 1588v2 packets. After determining that the GM IDs are the
same, the equipment compares the routes of the packets received by port B1 and port B2.
After determining that the route along which the packets travel to port B2 is shorter than
the route along which the packets travel to port B1, the equipment selects port B2 as the
slave clock port and selects port B1 as the master clock port.
3. After port C1 and port C2 receive the IEEE 1588v2 packets, the equipment compares the
GM IDs contained in the IEEE 1588v2 packets. After determining that the GM IDs are the
same, the equipment compares the routes of the packets received by port C1 and port C2.
After determining that the route along which the packets travel to port C1 is shorter than
the route along which the packets travel to port C2, the equipment selects port C1 as the
slave clock port and selects port C2 as the master clock port. In addition, the NE ID of node
B (namely, the opposite NE of port C2) is smaller than the NE ID of node C. Hence, port
C2 changes to the passive state.
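The comparison that nodes B and C perform can be sketched as a simplified BMC decision; the field names are illustrative, and the sketch assumes a single reference clock source so only the topology comparison and the NE-ID tie-break matter:

```python
def better_source(a, b):
    """Simplified BMC comparison between two received Announce packets.
    Same GM ID: fewer hops wins (topology comparison); on a hop-count tie,
    the smaller sender NE ID wins, and the losing port goes passive."""
    if a["gm_id"] != b["gm_id"]:
        # A full data set comparison would rank the GM clock attributes
        # here; this sketch assumes one reference clock, so GM IDs match.
        raise ValueError("data set comparison needed for different GMs")
    if a["hops"] != b["hops"]:
        return a if a["hops"] < b["hops"] else b
    return a if a["sender_ne_id"] < b["sender_ne_id"] else b

# Node C in Figure 20-24: port C1 hears node A directly (1 hop),
# port C2 hears node A through node B (2 hops).
c1 = {"gm_id": 1, "hops": 1, "sender_ne_id": 1}
c2 = {"gm_id": 1, "hops": 2, "sender_ne_id": 2}
slave_side = better_source(c1, c2)  # C1 becomes the slave clock port
```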
l Delay_Req and Pdelay_Req packets: used by the requesting clock to generate the delay
measurement request. The Delay_Req packet is used in the case of the E2E TC model and
the Pdelay_Req packet is used in the case of the P2P TC model.
l Delay_Resp and Pdelay_Resp packets: used to respond to the delay measurement request
generated by the requesting clock. The Delay_Resp packet is used in the case of the E2E
TC model and the Pdelay_Resp packet is used in the case of the P2P TC model.
l Management packet: used to query and update the PTP data sets for clock maintenance and
also used to customize, initialize, and troubleshoot a PTP system. The Management packet
is used between the management node and the clock equipment.
l Signaling packet: used between the clock equipment to realize communication for special
purposes. For example, the Signaling packet can be used to negotiate the rate of the unicast
messages between the master clock and the slave clock.
The Sync, Delay_Req, and Delay_Resp packets are used to generate and transfer the timing
information that is synchronized by using the Delay_Req/Delay_Resp mechanism and is
required by the OC and BC equipment.
The Pdelay_Req and Pdelay_Resp packets are used to measure the link delay between the two
clock ports that realize the Pdelay mechanism. The OC and BC equipment is synchronized by
using the timing information in the Sync packets and the link delay information.
NOTE
The Pdelay mechanism measures the point-to-point transmission time, namely, link delay, between two
communication ports that support the Pdelay mechanism.
The E2E port and P2P port use different delay measurement mechanisms and hence cannot be
used at the same time on the same communication path. That is, two neighboring ports on the
same time path must be of the same delay mechanism, namely, E2E or P2P. When the time path
is changed, the E2E mechanism recalculates the end-to-end resident time and the transmission
time delay whereas the P2P mechanism calculates the transmission delay and resident time on
the changed time path only.
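The rule that two neighboring ports on one time path must use the same delay mechanism can be expressed as a trivial consistency check (a sketch with illustrative names, not a product API):

```python
def path_mechanisms_consistent(path_ports):
    """Return True only if every pair of neighboring ports on the time
    path uses the same delay measurement mechanism (all E2E or all P2P);
    mixing E2E and P2P on one path breaks delay measurement."""
    return all(a == b for a, b in zip(path_ports, path_ports[1:]))

ok = path_mechanisms_consistent(["E2E", "E2E", "E2E"])   # consistent path
bad = path_mechanisms_consistent(["E2E", "P2P", "E2E"])  # mixed mechanisms
```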
(Message exchange between the master clock and the slave clock: the master sends a Sync packet at t1; the slave receives it at t2 and now holds t1 and t2; the slave sends a Delay_Req at t3; the master receives it at t4 and returns a Delay_Resp carrying t4, so the slave obtains t1, t2, t3, and t4. t-ms and t-sm denote the master-to-slave and slave-to-master transmission times.)
Delay_Resp
1. The master clock transmits a synchronization packet to the slave clock. The master clock
adds timestamp t1 into the synchronization packet, and the slave clock extracts timestamp
t1 from the synchronization packet and records the receipt time of the Sync packet, namely,
t2.
2. The slave clock sends the Delay_Req message to the master clock. The slave clock records
the accurate transmit time of the Delay_Req message, namely, t3. The master clock records
the accurate receipt time of the Delay_Req message, namely, t4.
3. The master clock returns a Delay_Resp message to the slave clock. The Delay_Resp
message contains the accurate receipt timestamp t4 of the Delay_Req message. The slave
clock can then calculate the transmission delay (delay) and time offset (offset) between the
master and slave clocks according to the timestamps t1, t2, t3, and t4. The formulas for the
two transmission directions are as follows: t_ms = t2 - t1 = offset + ms_delay and
t_sm = t4 - t3 = -offset + sm_delay. If the transmission delays in the two directions are the
same, that is, if ms_delay = sm_delay = delay, then offset = (t_ms - t_sm)/2 =
(t2 - t1 - t4 + t3)/2 and delay = (t_ms + t_sm)/2 = (t2 - t1 + t4 - t3)/2.
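The Delay_Req/Delay_Resp arithmetic above is easy to verify numerically; the timestamps below are invented for illustration (slave clock 5 units ahead of the master, symmetric path delay of 3 units):

```python
def e2e_offset_and_delay(t1, t2, t3, t4):
    """Delay_Req/Delay_Resp calculation, assuming symmetric path delay
    (ms_delay = sm_delay): offset = (t2 - t1 - t4 + t3) / 2 and
    delay = (t2 - t1 + t4 - t3) / 2."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Sync sent at t1=100, received at t2=108 (delay 3 + offset 5);
# Delay_Req sent at t3=120, received at t4=118 (delay 3 - offset 5).
offset, delay = e2e_offset_and_delay(t1=100, t2=108, t3=120, t4=118)
# offset = 5.0, delay = 3.0
```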
(Pdelay message exchange: the requesting clock sends a Pdelay_Req at t1; the responding clock receives it at t2 and returns a Pdelay_Resp at t3 that carries t3 - t2; the requesting clock receives the response at t4 and thus holds t1, t3 - t2, and t4.)
1. The requesting clock sends a delay request to the responding clock. The requesting clock
adds timestamp t1 into the delay request packet, and the responding clock extracts timestamp
t1 from the delay request packet and records the receipt time of the delay request packet,
namely, t2.
2. The responding clock returns a delay response packet to the requesting clock. The delay
response packet contains the value t3 - t2, namely, the difference between the accurate time
that the responding clock sends the delay response packet and the receipt time of the delay
request packet. The requesting clock records the accurate receipt time of the delay response
packet, namely, t4.
3. The requesting clock can then calculate the transmission delay (delay) of the link between
the requesting clock and the responding clock according to the values of t1, t3 - t2, and t4.
Assuming the path delay is the same in both directions, delay = ms_delay = sm_delay =
[(t2 - t1) + (t4 - t3)]/2 = [(t4 - t1) - (t3 - t2)]/2.
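The Pdelay arithmetic can likewise be checked numerically; the timestamps below are invented for illustration (responder clock 2 units ahead, symmetric link delay of 5 units):

```python
def pdelay_link_delay(t1, t2, t3, t4):
    """Pdelay_Req/Pdelay_Resp calculation: the requesting clock knows t1,
    t4, and the responder's turnaround time t3 - t2, so the link delay is
    [(t4 - t1) - (t3 - t2)] / 2, assuming a symmetric link."""
    return ((t4 - t1) - (t3 - t2)) / 2

# Pdelay_Req sent at t1=200, stamped t2=207 by the responder (delay 5 +
# offset 2); Pdelay_Resp sent at t3=209, received at t4=212.
link_delay = pdelay_link_delay(200, 207, 209, 212)
# link_delay = 5.0; the responder's clock offset cancels out
```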
(Networking diagram: the PRC provides clock information to intermediate equipment for time synchronization on a 10 Gbit/s ring at the convergence layer, including NEs A and B; the clock is then distributed over 2.5 Gbit/s, 622 Mbit/s, and 155 Mbit/s chains through further intermediate equipment for time synchronization to NEs E, F, and G at the access layer.)
The intermediate NEs A and B, which comply with IEEE 1588v2, receive the clock signals from
the clock source PRC through the external clock interfaces and transmit the clock signals to NEs
E, F, and G, and then to the wireless equipment.
Prerequisites
You must be an NM user with NE operator authority or higher.
Precautions
CAUTION
The clock frequency synchronization can be realized by configuring the IEEE 1588 V2 time
synchronization or the traditional clock synchronization. When the IEEE 1588 V2 time
synchronization and the traditional clock synchronization are configured at the same time, only
the IEEE 1588 V2 time synchronization works. To realize the frequency synchronization by
configuring the IEEE 1588 V2 clock, you need to set Protocol Frequency Tracing to
Enabled.
Procedure
Step 1 In the NE Explorer, select the NE and choose Configuration > Clock > PTP Clock > Basic
Clock Configuration from the Function Tree.
Step 2 In the window on the right, click the NE Parameter Configuration tab and set the relevant
parameters.
NOTE
In this window, you can click Query to query parameters of NE Parameter Configuration.
Step 3 Click Apply. Then, the Operation Result dialog box is displayed, indicating that the operation
is successful.
Step 4 Click Close.
----End
20.2.9.2 Configuring the Encapsulation Type of IEEE 1588 V2 Packets on the Data
Board
You need to configure the encapsulation type of IEEE 1588 V2 packets so that a data board can
transmit the IEEE 1588 V2 packets.
Prerequisites
l
The data board that supports the transmission of the IEEE 1588 V2 packets must be created.
Procedure
Step 1 In the NE Explorer, select the NE and choose Configuration > Clock > PTP Clock > Basic
Clock Configuration from the Function Tree.
Step 2 Click the Configuration of the Protocol 1588 Packet on the Data Board tab in the window
on the right. Set Encapsulation Type, Whether to Detect the VLAN Tag, and VLAN ID for
the data board.
NOTE
The data ports support the IEEE 1588 V2 function only when working in full-duplex mode. If the port
mode is set to full duplex for the port on the user equipment and to auto-negotiation for the port on this
data board, auto-negotiation fails.
Step 3 Click Apply. Then, the Operation Result dialog box is displayed, indicating that the operation
is successful.
Step 4 Click Close.
----End
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 In the NE Explorer, select the NE and choose Configuration > Clock > PTP Clock > Clock
Interface Configuration from the Function Tree.
Step 2 In the window on the right, click the External Clock Interface Configuration tab and set the
relevant parameters of the external clock interface for an NE.
CAUTION
l In this window, you can click Query to query the parameters in External Clock Interface
Configuration.
l Two BITSs are connected to a ring, and a working/protection relationship exists between
the two BITSs. If the attribute settings of the two BITSs are the same, certain NEs on the
ring trace one BITS and the rest of the NEs trace the other BITS. That is, the working/protection
relationship does not exist between the two BITSs. To prevent this problem, set the attributes
of the two BITSs to different values.
l The OptiX OSN 7500 II supports the setting of Interface Mode of an external time interface.
Interface Mode includes External clock port, External time input port, and External
time output port.
Step 3 Click Apply. Then, the Operation Result dialog box is displayed, indicating that the operation
is successful.
Step 4 Click Close.
----End
Prerequisites
l
The board that can be configured as the clock source must be created.
Procedure
Step 1 In the NE Explorer, select the NE and choose Configuration > Clock > PTP Clock > Clock
Interface Configuration from the Function Tree.
Step 2 In the window on the right, click the Clock Source Priority Table tab. Click New.
Step 3 Optional: In the New Time Source dialog box that is displayed, select the time source and click
Confirm.
Step 4 Optional: In the Clock Source Priority Table tab, select the clock source to be deleted and
click Delete.
NOTE
Step 5 Click Apply. Then, the Operation Result dialog box is displayed, indicating that the operation
is successful.
Step 6 Click Close.
Step 7 Click Query in the Clock Source Priority Table tab to check the Switching Mode of Clock
Source ID currently traced, Main clock ID currently traced, and Time Resource.
Step 8 Click Close in the Operation Result dialog box that is displayed.
----End
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select the NE and choose Configuration > Clock > PTP Clock > Clock
Interface Configuration from the Function Tree.
Step 2 In the window on the right, click the Clock Source Priority Table tab. Click Query.
Step 3 The Operation Result dialog box is displayed. Click Close.
----End
Prerequisites
l
The board that can be configured as the clock source must be created.
Procedure
Step 1 In the NE Explorer, select the NE and choose Configuration > Clock > PTP Clock > Clock
Interface Configuration from the Function Tree.
Step 2 In the window on the right, click the Clock Interface Configuration tab and set the relevant
parameters.
NOTE
Peer Clock Source ID to Be Received must be set to the same value as the clock ID at the opposite end.
Step 3 Click Apply. Then, the Operation Result dialog box is displayed, indicating that the operation
is successful.
Step 4 Click Close.
----End
Configuration Networking
As shown in Figure 20-28, NE1 receives the IEEE 1588v2 packets and transmits them to the wireless base station through NE3. After the wireless base station receives the IEEE 1588v2 time information, its time is synchronized with the time of the entire network.
Figure 20-28 Networking diagram (transmission of the IEEE 1588v2 time)
(The figure shows the 1588 MASTER and RNC connected to NE1; NE1, NE2, and NE3 connected by fiber; and NE3 connected by cable to the NodeB, which acts as the 1588 SLAVE. Slot-layout diagrams of the NEs follow, showing PIU, AUX, FAN, PEG8, SL64, PSXCS, and GSCC boards.)
Parameter | NE1 | NE2 | NE3
- | Enabled | Enabled | Enabled
- | BC | BC | BC
- | 52 | 146 | 146
Quality Accuracy of the Clock Source | 34 | 40 | 40
- | 12 | 12 | -
- | PTP | INTERNAL_OSCILLATOR | INTERNAL_OSCILLATOR
NE Time | - | - | -
Message Multicast Mode | Multicast Mode | Multicast Mode | Multicast Mode
Time Calibration | Enabled | Enabled | Enabled
Protocol Frequency Tracing | Enabled | Enabled | Enabled
Table 20-12 lists the parameters of External Clock Interface Configuration for NE1 and NE3.
Table 20-12 Parameters of the external clock interface
Parameter | NE1 | NE3
- | Enabled | Enabled
- | Enabled | Enabled
- | 11 | 13
- | GPS | INTERNAL_OSCILLATOR
Clock Level | 187 | -
Clock Accuracy | 32 | 45
- | 255 | 255
Adjustment Mode in Receive Direction | Length(m) | Length(m)
Adjustment Value in Receive Direction | 100 | 1000
Adjustment Direction in Receive Direction | Positive | Negative
Adjustment Mode in Transmit Direction | Length(m) | Length(m)
Adjustment Value in Transmit Direction | 100 | 1000
Adjustment Direction in Transmit Direction | Positive | Negative
- | rs232 | rs232
- | rs422 | rs422
Table 20-13 lists the values of Time Resource in Clock Source Priority Table for NE1, NE2,
and NE3.
Table 20-13 Clock source Priority table
NE
Time Resource
NE1
NE2
NE3
Table 20-14 lists the values of the parameters of Clock Interface Configuration for NE2 and
NE3.
Table 20-14 Clock port configuration table
Port | Time Resource | Time Port Delay | Announce Packet Receiving Timeout Coefficient | Fiber Deviation Mode | Fiber Deviation Direction | Fiber Deviation
NE2-3-PEG8-1 (Port-1) | - | P2P | - | length(m) | positive direction | -
NE2-15-PEG8-1 (Port-1) | - | P2P | - | length(m) | positive direction | -
NE3-3-PEG8-1 (Port-1) | - | P2P | - | length(m) | positive direction | -
NE3-15-PEG8-1 (Port-1) | - | P2P | - | length(m) | positive direction | -
Prerequisites
l
Procedure
Step 1 Configure NE1.
1.
2. According to 20.2.9.3 Setting the Attributes of the External Clock Interface for an NE, set the parameters in External Clock Interface Configuration of NE1. For detailed configuration parameters, refer to Table 20-12.
3. According to 20.2.9.4 Configuring the Clock Source Priority Table for an NE, set the parameters in Clock Source Priority Table of NE1. For detailed configuration parameters, refer to Table 20-13.
Step 2 Configure NE2.
2. According to 20.2.9.4 Configuring the Clock Source Priority Table for an NE, set the parameters in Clock Source Priority Table of NE2. For detailed configuration parameters, refer to Table 20-13.
3.
Step 3 Configure NE3.
2. According to 20.2.9.3 Setting the Attributes of the External Clock Interface for an NE, set External Clock Interface Configuration of NE3. For detailed configuration parameters, refer to Table 20-12.
3. According to 20.2.9.4 Configuring the Clock Source Priority Table for an NE, set Clock Source Priority Table of NE3. For detailed configuration parameters, refer to Table 20-13.
4.
----End
Prerequisites
l
Context
When the IEEE 1588v2 time synchronization and clock synchronization functions are normal, each port of the NE runs the BMC algorithm independently and automatically evaluates the time packets received on each port to determine the clock source to be traced. When the clock information of the currently traced clock source is degraded, the NE runs the BMC algorithm and automatically switches to the clock source with the second highest priority defined in the clock source priority table. A successful switchover therefore indicates that the IEEE 1588v2 time synchronization and clock synchronization functions are normal.
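The priority-based switchover described in the Context above can be illustrated with a small sketch. This is not the equipment's actual BMC algorithm; the `ClockSource` fields and the comparison order (clock level, then accuracy, then priority, lower values preferred) are simplifying assumptions chosen to mirror the attribute names used in this chapter.

```python
from dataclasses import dataclass

@dataclass
class ClockSource:
    """Attributes announced by a candidate clock source (hypothetical fields)."""
    source_id: int
    clock_level: int      # lower value = better quality level
    clock_accuracy: int   # lower value = more accurate
    priority: int         # lower value = preferred

def select_best_source(candidates):
    """Pick the best source by comparing quality first, then accuracy, then
    priority, mimicking how a BMC-style comparison falls back through
    attributes when earlier ones tie."""
    if not candidates:
        return None
    return min(candidates,
               key=lambda s: (s.clock_level, s.clock_accuracy, s.priority))

sources = [
    ClockSource(source_id=11, clock_level=6, clock_accuracy=32, priority=1),
    ClockSource(source_id=13, clock_level=6, clock_accuracy=45, priority=2),
]
best = select_best_source(sources)          # source 11 wins on accuracy

# Degrading the traced source (e.g. raising its clock level) makes the
# selection fall through to the next-best candidate, i.e. a switchover.
sources[0].clock_level = 187
best_after = select_best_source(sources)    # source 13 is now selected
```

Degrading the traced source here plays the same role as raising Clock Level and Clock Accuracy in the verification procedure: the comparison key worsens, so the selection moves to the standby source.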
Procedure
Step 1 In the NE Explorer, select the NE and choose Configuration > Clock > PTP Clock > Clock
Interface Configuration from the Function Tree.
Step 2 In the window on the right, click the Clock Source Priority Table tab. Click Query to query
the values of Clock source ID currently Traced and Main clock ID currently Traced.
Step 3 Optional: When the NE traces an external clock source, click the External Clock Interface
Configuration tab and set the Clock level, Clock accuracy, Clock source priority 1, and Clock
source priority 2 parameters to larger values, thus downgrading the external clock source.
CAUTION
After the external clock source is downgraded, the NE switches to trace another clock source.
In this case, the time synchronization in the entire time domain is affected. Exercise caution
when no standby external clock source is configured for the time domain.
Step 4 Repeat Step 2. If the value of Clock source ID currently Traced or Main clock ID currently
Traced is changed, this indicates that the IEEE 1588 V2 time synchronization and clock
synchronization functions are normal.
Step 5 After the verification, restore the modified parameters to the original values.
----End
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 According to 20.2.9.3 Setting the Attributes of the External Clock Interface for an NE, check
whether the parameter settings in External Clock Interface Configuration of the NE comply
with the network planning. For parameters of External Clock Interface Configuration, see
20.2.15.5 External Clock Interface Configuration.
Step 2 According to 20.2.9.4 Configuring the Clock Source Priority Table for an NE, check whether
the parameter settings in Clock Source Priority Table of the NE comply with the network
planning. Check whether the values of Clock source ID currently traced and Main clock ID
currently traced of the NE are correct. For parameters of Clock Source Priority Table, see
20.2.15.2 Clock Source Priority Table.
Step 3 According to 20.2.9.6 Configuring the Clock Interface, check whether the parameter settings
in Clock Interface Configuration of the NE comply with the network planning. For parameters
of Clock Interface Configuration, see 20.2.15.4 Clock Interface Configuration.
----End
20.2.13 Troubleshooting
This topic describes how to rectify the fault that the clock source cannot be automatically
switched to a standby clock source when the current tracing clock is downgraded.
Prerequisites
You must be an NM user with NE operator authority or higher.
Context
The possible causes of the failure of the clock source switching are as follows:
l The parameter settings of the IEEE 1588v2 clock are incorrect.
l The standby clock source is unavailable.
Figure 20-30 Flowchart for troubleshooting the failure of the automatic switching of the clock
source
(The flowchart first checks whether the NE switches to the standby clock source; if not, parameters are modified. It then checks whether the standby clock source is available and whether the switchover succeeds, and ends when the clock source is switched.)
Procedure
Step 1 Cause 1: The parameter settings of the IEEE 1588v2 clock are incorrect.
1. Check whether the settings of the parameters in NE Parameter Configuration are correct. Ensure that PTP protocol status is set to Enabled.
2. Check whether the standby clock source is set in Clock Source Priority Table. Ensure that the standby clock source is set in Clock Source Priority Table.
3. Optional: If the standby clock source is External clock source 2, check whether the setting of External Clock Interface 2 is correct. Ensure that the parameters of External Clock Interface 2 are set correctly.
Step 2 Cause 2: The standby clock source is unavailable.
1. Check current alarms to determine whether any alarms that are related to the standby clock source exist. If yes, handle the alarms.
2. Optional: If the standby clock source is an internal clock of an upstream NE, check whether the settings of the parameters in NE Parameter Configuration of the upstream NE are correct.
----End
The relevant alarms include:
l TIME_NO_TRACE_MODE
l TIME_LOS
l TIME_FORCE_SWITCH
l EXT_TIME_LOC
l TIME_NOT_SUPPORT
20.2.15.1 NE Parameters
You need to set the NE parameters for achieving the IEEE 1588v2 time synchronization and
clock synchronization.
Table 20-16 lists the NE parameters.
Table 20-16 NE parameters
Field
Value Range
Description
Enabled, Disabled
Field
Value Range
Description
Clock Working
Mode
BC, OC
For example, 00094e2d00000000
NOTE
The parameter value ranges from 0 to 0xfffffffe.
0, 4 to 127
Quality Accuracy of the Clock Source
32 to 49
0 to 65535
ATOMIC_CLOCK, GPS, TERRESTRIAL_RADIO, PTP, NTP, HAND_SET, OTHER, INTERNAL_OSCILLATOR
Clock Source Priority 1
0 to 255
Clock Source Priority 2
0 to 255
NE Time
None
None.
Field
Value Range
Description
Message Multicast
Mode
Multicast Mode, Partially Multicast Mode
Time Calibration
Enabled, Disabled
Protocol Frequency
Tracing
Enabled, Disabled
Value
Description
Time Resource
Switching Mode
Clock source ID currently traced
Main clock ID currently traced
Table 20-18 Parameters associated with configuring the IEEE 1588v2 packet on the data board
Field
Value
Description
VLAN ID
Value
Description
Time Resource
E2E, P2P
NOTE
The N1PEX2 and N2PEX1 boards do not support the P2P time port delay.
Announce Packet Receiving Timeout Coefficient
2 to 8
Field
Value
Description
INITIALIZING, FAULTY, DISABLED, LISTENING, PRE_MASTER, MASTER, PASSIVE, UNCALIBRATED, SLAVE
time(ns), length(m)
Fiber Deviation Direction
Fiber Deviation
0 to 14562
0 to 65529
NOTE
To configure the IEEE 1588v2 clock synchronization, you must set this parameter to -7 or a value less than -7 for sync packets.
Field
Value
Description
Enabled, Disabled
Enabled, Disabled
0 to 0xffffffffffffffff
ATOMIC_CLOCK, GPS, TERRESTRIAL_RADIO, PTP, NTP, HAND_SET, OTHER, INTERNAL_OSCILLATOR
Clock Level
Clock Accuracy
32 to 49
0 to 255
0 to 255
Time(ns), Length(m)
Field
Value
Description
Adjustment Direction in Receive Direction
Positive, Negative
Adjustment Mode in Transmit Direction
Time(ns), Length(m)
Adjustment Value in Transmit Direction
Adjustment Direction in Transmit Direction
Positive, Negative
rs232, rs422
ttl, rs422
20.3.1 Introduction
The synchronous Ethernet clock is a technology that extracts the clock from the serial bit stream on the Ethernet line and uses the extracted clock to transmit data, realizing the transfer of clock information.
Definition
The synchronous Ethernet clock is a technology of frequency synchronization over the physical
layer. The system directly extracts the clock signal from the serial bit stream on the Ethernet
line, and transmits the data to each board by using the clock signal to realize the transfer of clock
information.
Purpose
As networks are increasingly based on Ethernet transfer technology, large-scale carrier-class networks require synchronous Ethernet to transmit the clock, introducing the network-wide synchronous timing transmission concept of the SDH system into the Ethernet design. The clock signal can thus be transmitted from the core to the edge over the Ethernet physical layer, providing reliable timing for various real-time services.
SSM Byte
The synchronous status message (SSM) byte is located on the twenty-eighth byte of the Ethernet frame. The higher four bits of this byte represent the clock ID, and the lower four bits represent the clock quality. These bits represent the clock quality in the same way as the S1 byte of the SDH network: the smaller the value of the lower four bits of the SSM byte, the higher the clock source quality. Table 20-21 lists the represented clock quality.
Table 20-21 Mapping between the SSM and the clock quality
0000 | Unknown synchronization quality
0001 | Reserved
0010 | G.811 clock signal
0011 | Reserved
0100 | SSU-A (a)
0101 | Reserved
0110 | Reserved
0111 | Reserved
1000 | SSU-B (a)
1001 | Reserved
1010 | Reserved
1011 | G.813 SDH equipment timing source (SETS) signal
1100 | Reserved
1101 | Reserved
1110 | Reserved
1111 | Do not use for synchronization
a: The "G.812 Transit Exchange" and "G.812 Local Clock" terms are used in the previous version
of ITU-T Recommendations. In the new version of ITU-T G.812, the clock definition is changed
to synchronization supply unit (SSU) that is available in types A and B.
The clock ID is bits 1-4 of the SSM byte, and takes a value of 0x0-0xf. Basically, the clock ID is used to distinguish the clock information about the local node from the clock information about another node, to prevent a node from tracing a clock signal that it transmitted itself and that returns from the opposite direction. In this manner, a timing loop is prevented.
When the clock ID is 0, it indicates that the clock ID is invalid. Thus, the clock ID is 0 when the ID is not set for a clock source. When the extended SSM protocol is enabled, the NE does not select a clock source whose ID is 0 as the current clock source.
The clock ID is a tag that is set for a reference timing source. Clock sources at the same quality level that carry different IDs represent different timing signals, although they are identical in priority level and other aspects.
SSM Protocol
SSM protocols are classified into the standard SSM protocol and the extended SSM protocol.
l In the case of the extended SSM protocol, Huawei introduces the idea of the clock ID based
on the standard SSM protocol. The extended SSM protocol uses bits 1-4 of the SSM byte
as the unique ID of a clock source, and transports the clock ID with an SSM. After a node
receives the SSM byte, the node verifies the clock ID (bits 1-4) to determine whether the
clock is locally transmitted. If yes, the node considers that the clock source cannot be used.
Thus, a timing loop is prevented. The extended SSM protocol is mainly used for the
interconnection of transmission devices of Huawei.
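The SSM-byte and clock-ID rules above can be sketched as a small parser. This is an illustration only: `LOCAL_CLOCK_ID` and the quality subset are assumed values, and the nibble layout (high four bits = clock ID, low four bits = quality) follows the description in this section.

```python
# Quality codes for the low nibble (subset; smaller value = better quality).
QUALITY = {0b0010: "G.811 PRC", 0b0100: "SSU-A", 0b1000: "SSU-B",
           0b1011: "G.813 SETS", 0b1111: "Do not use"}

LOCAL_CLOCK_ID = 0x5  # hypothetical ID assigned to this node's clock source

def evaluate_ssm(ssm_byte):
    """Split an SSM byte into clock ID (high nibble) and quality (low nibble),
    then apply the extended-SSM acceptance rules: a source with ID 0 (no ID
    set) or with our own ID (timing loop) must not be selected."""
    clock_id = (ssm_byte >> 4) & 0xF
    quality = ssm_byte & 0xF
    usable = clock_id != 0 and clock_id != LOCAL_CLOCK_ID
    return clock_id, quality, usable

cid, q, usable = evaluate_ssm(0x34)   # ID 3, quality 0b0100 (SSU-A): usable
```

A byte of `0x04` (ID not set) or `0x54` (our own ID looped back) would be rejected even though the quality nibble is good, which is exactly the timing-loop protection the extended SSM protocol adds.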
20.3.3 Specifications
This section describes the capacity of the OptiX OSN equipment for supporting synchronous
Ethernet clocks.
Table 20-22 provides specifications associated with synchronous Ethernet clocks.
Specifications
Support capability
Number of ports: 64
1 ppm
Synchronization precision
ITU-T G.812: Timing requirements of slave clocks suitable for use as node clocks in
synchronization networks
20.3.5 Availability
The synchronous Ethernet clock feature requires the support of the applicable equipment and
boards.
Version Support
Product Name
Applicable Version
V100R009C03
T2000
V200R007C03
Product Name
Applicable Version
U2000
Product Name
Applicable Version
U2000
Hardware Support
Board | Type | Whether to Support the Extended SSM Protocol | Applicable Equipment Version | Applicable Equipment
N1PEG16 | - | Yes | V100R009C03 and later | -
N1PEX1 | - | Yes | V100R009C03 and later | -
N1PETF8 | - | Yes | V100R009C03 and later | -
R1PEFS8 | - | Yes | V100R009C03 and later | -
Q1PEGS2 | - | Yes | V100R009C03 and later | -
R1PEGS1 | - | Yes | V100R009C03 and later | -
N1PEG8 | - | Yes | V200R011C00 and later | -
N2PEX1 | - | Yes | V200R011C00 and later | -
N1PEX2 | - | Yes | V200R011C00 and later | -
N1PEFF8 | - | Yes | V200R011C00 and later | -
R1PEF4F | - | Yes | V200R011C00 and later | -
TNN1EX2 | - | Yes | V200R011C01 and later | -
TNN1EG8 | - | Yes | V200R011C01 and later | -
TNN1ETMC | - | Yes | V200R011C01 and later | -
TNN1EFF8 | - | Yes | V200R011C02 and later | -
Remarks
Synchronous
Ethernet clock
Synchronous
Ethernet clock
Synchronous
Ethernet clock
TNN1EG8 and
N1PEG8
Maintenance Principles
None.
20.3.7 Principles
This topic describes the principles for realizing the synchronous Ethernet clock function.
(The figure shows the frame format of the SSM slow-protocol packet; its fields include the ITU-OUI, ITU Subtype, Reserved bytes, TLV, and SSM, with a payload of 64-1490 bytes.)
MAC Source Address: indicates the MAC address of the port at the transmit end.
Slow Protocol Ethertype: is always set to 0x8809, which indicates the type of the slow
protocol.
Slow Protocol Subtype: is always set to 0x0A, which indicates that the type of the slow
protocol is the synchronous Ethernet clock.
ITU-OUI: is always set to 0x00-19-A7, which indicates the unique ID of the protocol type
of the synchronous Ethernet clock.
ITU-Subtype: is always set to 0x0001, which indicates the subtype of the protocol type of
the synchronous Ethernet clock.
Version and Event Flag: The higher four bits of this byte indicate the protocol version; the
fifth bit indicates the event flag (When the value of the SSM byte is changed, the value of
this bit is set to 1); the lower three bits are reserved.
SSM: The first four bits indicate the clock ID, and the last four bits indicate the quality of
the clock source information.
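Under the field definitions above, the fixed header of such a frame can be sketched as follows. The function name and the chosen version value are hypothetical; only the constants (Ethertype 0x8809, subtype 0x0A, ITU-OUI 0x00-19-A7, ITU Subtype 0x0001) and the bit layout of the version/event byte come from the text.

```python
import struct

ETHERTYPE_SLOW = 0x8809          # slow protocol Ethertype
SUBTYPE_SYNC_ETH = 0x0A          # synchronous Ethernet clock slow protocol
ITU_OUI = bytes.fromhex("0019A7")
ITU_SUBTYPE = 0x0001

def build_ssm_header(version, event, ssm_byte):
    """Assemble the fixed fields described above (MAC addresses, TLV, and
    padding omitted). The version/event byte packs the protocol version in
    the high 4 bits and the event flag in the fifth bit; the low 3 bits
    are reserved (zero)."""
    ver_event = ((version & 0xF) << 4) | ((1 if event else 0) << 3)
    return struct.pack("!HB3sHB", ETHERTYPE_SLOW, SUBTYPE_SYNC_ETH,
                       ITU_OUI, ITU_SUBTYPE, ver_event) + bytes([ssm_byte])

frame = build_ssm_header(version=1, event=False, ssm_byte=0x34)
```

Setting `event=True` would set the fifth bit, matching the rule that the event flag is raised when the SSM value changes.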
(The figure shows clock information from the BITS and RNC/BSC side carried around a GE ring and delivered over FE links and optical fiber to the NodeB.)
The clock signal is transmitted between NEs over the synchronous Ethernet as follows:
1. The BITS transmits the clock signal to the NE on the synchronous Ethernet through the external clock interface.
2. Through the synchronous Ethernet interface, the data board that supports the synchronous Ethernet clock function extracts the clock signal from the serial bit stream on the Ethernet physical-layer line and then selects a clock source.
3. After the NE extracts the clock, the clock phase-locked loop traces the optimal clock source and locks its phase. Then, the system clock is synchronized with the optimal clock source.
4. Finally, the NE transmits the system clock to the NodeB, BSC, or RNC through the synchronous Ethernet interface of the data board. In this manner, the clock signal is transmitted downstream on the synchronous Ethernet.
(The figure shows a PRC clock source feeding clock information through clock synchronization intermediate equipment over 2.5 Gbit/s, 622 Mbit/s, and 155 Mbit/s chains and a GE ring at the convergence layer, down to clock synchronization slave equipment at the access layer. The OptiX OSN equipment serves as the intermediate and slave equipment.)
Prerequisites
You must be an NM user with NE operator authority or higher.
Background Information
To achieve clock protection, you must configure at least two traceable clock sources for an NE.
After you configure clock sources for all NEs, query the network-wide clock tracing status again.
Procedure
Step 1 In the NE Explorer, select the required NE from the Object Tree and then choose
Configuration > Clock > Physical Layer Clock > Clock Source Priority from the Function
Tree.
Step 2 Click Query to query the existing clock sources.
Step 3 Click Create. In the Add Clock Source dialog box that is displayed, select a new clock source.
Click OK.
Step 4 Optional: If an external clock source is selected, select External Clock Source Mode based
on the type of external clock signals. In the case of 2 Mbit/s clocks, specify Synchronous Status
Byte that functions to transfer the clock quality information.
Step 5 Select a clock source and click
or
NOTE
Internal clock sources have the lowest priority due to their low precision.
If the clock tracing relationship changes because of any change in the clock source, the Warning dialog
box is displayed, asking you whether to refresh the clock tracing relationship. In this case, click OK.
----End
Prerequisites
You must be an NM user with NE operator authority or higher.
Procedure
Step 1 In the NE Explorer, select the required NE from the Object Tree and then choose
Configuration > Clock > Physical Layer Clock > Clock Subnet Configuration from the
Function Tree.
Step 2 Click the Clock Subnet tab. Then, click Query to query the parameter settings.
Step 3 Select Start Standard SSM Protocol or Start Extended SSM Protocol.
NOTE
Select the same type of SSM protocol for the NEs that belong to the same clock subnet.
Step 4 Set a subnet ID for the clock subnet to which the NE belongs.
NOTE
The NEs that trace the same clock source are allocated with the same clock subnet ID.
Step 5 Optional: If the extended SSM protocol is enabled, you need to set Clock Source ID.
Step 6 Click Apply. In the Operation Result dialog box that is displayed, click Close.
----End
SSM protocol. This topic considers the extended SSM protocol as an example to describe how
to configure synchronous Ethernet clocks.
The mode of the external clock equipment is 2 Mbit/s, and the timeslot is SA4.
The clock source priority of each NE is set according to the clock source priority list
provided in Figure 20-34.
(Figure 20-34 shows the clock source connected to NE1, with NE1 through NE4 interconnected in a ring through their 3-PEG16-1 and 3-PEG16-2 optical ports.)
Procedure
Step 1 Configure the clock source priority for each NE. For details, see 20.3.9.1 Configuring NE Clock
Sources.
Set the following parameters for NE1:
l Clock Source: external clock source 1 that is connected to the BITS, 3-PEG16-2 optical port
that is connected to an upstream NE, 3-PEG16-1 optical port that is connected to a
downstream NE, and internal clock source.
l External Clock Source Mode: 2 Mbit/s
l Synchronous Status Byte: SA4
Set the following parameters for NE2:
l Clock Source: 3-PEG16-2 optical port that is connected to an upstream NE, 3-PEG16-1
optical port that is connected to a downstream NE, and internal clock source.
Set the following parameters for NE3 and NE4:
l Clock Source: 3-PEG16-1 optical port that is connected to an upstream NE, 3-PEG16-2
optical port that is connected to a downstream NE, and internal clock source.
Step 2 Configure the affiliated clock subnet of each NE. For details, see 20.3.9.2 Configuring
Protection for Clock Sources.
Set the following parameter for NE1 to NE4:
l Affiliated Subnet: 1
Step 3 Enable the extended SSM protocol for each NE. For details, see 20.3.9.2 Configuring
Protection for Clock Sources.
Set the following parameter for NE1 to NE4:
l Protection Status: Start Extended SSM Protocol
Step 4 Configure the clock ID for each clock source. For details, see 20.3.9.2 Configuring Protection
for Clock Sources.
Set the following parameters for NE1 to NE4.
NE | Clock Source | Clock Source ID
NE1 | BITS | -
NE1 | 3-PEG16-2 | -
NE1 | 3-PEG16-1 | -
NE2 | 3-PEG16-2 | -
NE2 | 3-PEG16-1 | -
NE3 | 3-PEG16-1 | -
NE3 | 3-PEG16-2 | -
NE4 | 3-PEG16-1 | -
NE4 | 3-PEG16-2 | -
NOTE
The clock source ID can be set to any value within the permitted range.
----End
Prerequisites
l
Procedure
Step 1 In the NE Explorer, select the NE from the Object Tree and then choose Configuration >
Clock > Physical Layer Clock > Clock Source Priority from the Function Tree.
Step 2 Click the System Clock Source Priority List tab in the pane on the right. Then, click Query
to query the priority sequence of the clock sources.
Step 3 In the NE Explorer, select the NE from the Object Tree and then choose Configuration >
Clock > Physical Layer Clock > Clock Subnet Configuration from the Function Tree.
Step 4 Click the Clock Subnet tab in the pane on the right. Then, click Query to query the IDs of the
clock sources.
Step 5 In the NE Explorer, select the NE from the Object Tree and then choose Configuration >
Clock > Physical Layer Clock > Clock Source Switching from the Function Tree.
Step 6 Click the Clock Source Switching tab in the pane on the right. Then, click Query to query the
statuses of the clock sources.
NOTE
If the value of Switching of a clock source is Normal, the synchronous Ethernet clock is configured
successfully.
----End
20.3.12 Troubleshooting
Following a certain troubleshooting flow facilitates your fault locating and rectification.
Fault Symptom
The common faults regarding the synchronous Ethernet clock are as follows:
A clock source cannot be properly switched, and the clock quality is low.
Possible Causes
l The connection at the port fails.
l An alarm associated with the synchronous Ethernet clock function occurs on the network.
Troubleshooting Flow
Following a certain troubleshooting flow facilitates your fault locating and rectification.
Figure 20-35 Flow chart for handling a synchronous Ethernet clock fault
(The flowchart first checks whether the connection at the port fails, then whether an alarm associated with the synchronous Ethernet clock occurs, and repeats the checks until the fault is rectified.)
Procedure
Step 1 Cause 1: The connection at the port fails.
1.
2. If yes, clear the alarm and then check whether the fault is rectified. For details on how to clear the alarm, see the Alarms and Performance Events Reference.
Step 2 Cause 2: An alarm associated with the synchronous Ethernet clock occurs on the network.
1. Check whether any alarm associated with the synchronous Ethernet clock occurs on the network. For details, see 20.3.13.1 Relevant Alarms.
2. If yes, clear the alarm. For details on how to clear the alarm, see the Alarms and Performance Events Reference.
Step 3 If the fault persists, contact Huawei technical support engineers to handle the fault.
----End
The relevant alarms include:
l CLK_NO_TRACE_MODE
l EXT_SYNC_LOS
l LTI
l S1_SYN_CHANGE
l SYNC_F_M_SWITCH
l SYNC_LOCKOFF
l SYNC_C_LOS
l SYN_BAD
l OOL
Value Range
Description
NE Name
NE Synchronous Clock Output <2M Phase-Locked Source>
Output Impedance of External Clock Source 1
Field
Value Range
Description
Output Impedance of External Clock Source 2
2 Mbit/s, 2 MHz
2 Mbit/s, 2 MHz
Table 20-25 lists the parameters for configuring the 2M phase-locked source attributes of an
external clock.
Table 20-25 Parameters for configuring the 2M phase-locked source attributes of an external clock
Field
Value Range
Description
2M Phase-Locked Source Number
Field
Value Range
Description
Open, Close
2 MHz, 2 Mbit/s
Field
Value Range
Description
2M Phase-Locked Source Fail Condition
2M Output S1 Byte
Unavailable, Send AIS, Shut Down Output
Default value: Shut Down Output
Field
Value Range
Description
NE Name
Field
Value Range
Description
Auto-Revertive, Non-Revertive
0 to 12
Default value: 5
Clock Source
Field
Value Range
Description
Yes, No
Default value: No
Yes, No
Default value: No
Yes
CV Threshold-Crossing Generated
Yes, No
CV Threshold
Field
Value Range
Description
Yes, No
Effective Status
Valid, Invalid
Lock Status
Lock, Unlock
Specifies whether a switching operation is allowed. Click A.27.15 Lock Status (Clock) for more information.
Switching Source
Switching Status
Field
Value Range
Description
NE Name
Field
Value Range
Description
S1 Byte Synchronization Quality Information
For example, NA
For example, NA
Field
Value Range
Description
Synchronous Source
Unknown Synchronization Quality, G.811 Clock Signal, G.812 Transit Clock Signal, G.812 Local Clock Signal, G.813 SDH Equipment Timing Source (SETS) Signal, Do Not Use For Synchronization, Automatic Extraction, NA
Value Range
Description
Retiming Mode
20.4.1 Introduction
This section provides the definition of CES ACR and describes its purpose.
Definition
CES ACR is a function that uses the adaptive clock recovery (ACR) technology to recover clock
synchronization information carried by CES packets. In the standard CES ACR solution, the
source end (Master) considers the local clock as the timestamp in the Real-time Transport
Protocol (RTP) packet header and encapsulates it in the CES packet; the sink end (Slave) recovers
the clock according to the timestamp in the packet. In this manner, signal impairment during the
transmission is prevented.
The OptiX OSN 7500 II adopts the enhanced timestamp clock solution. That is, clocks can be recovered based on the sequence numbers (SNs) in CES packets rather than the timestamps in RTP packet headers. See Figure 20-36.
(Figure 20-36 shows the BTS connected through E1 to PE1, CES packets carrying SNs crossing the PSN between PE1 and PE2, and PE2 connected through E1 to the BSC, which traces the primary reference clock. The Master encapsulates SNs during E1-to-CES processing; the Slave recovers the clock from them during CES-to-E1 processing.)
Purpose
In the packet domain, enhanced CES ACR is mainly used to transparently transmit E1 clocks
in the PSN. For details, see 20.4.2.2 Enhanced CES ACR Clock Synchronization Solution.
(The figure shows the Slave side: the CES packet processing module passes received CES packets to the clock recovery (ACR) module, which outputs the recovered E1 clock with the E1 service.)
All the clocks on the PSN are synchronous, but the clocks on the PSN are not synchronized
with the clock of the incoming service.
Figure 20-38 is a typical application example of the enhanced CES ACR clock synchronization
solution.
Figure 20-38 Typical application example of the enhanced CES ACR clock synchronization
solution
(The figure shows the BTS connected through E1 to PE1 (Slave); CES packets carrying SNs cross the PSN between PE1 and PE2 (Master); PE2 connects through E1 to the BSC, which traces the primary reference clock.)
In this example, the clock of the BSC needs to be transparently transmitted to the BTS along
with the CES service, but the clock of PE1 is not synchronous with the clock of PE2. In this
case, PE2 (Master) extracts the clock of the BSC from the E1 port, and controls the transmission
interval of CES packets according to the extracted clock. PE1 (Slave) recovers the clock of the
BSC according to the SN in the received CES packet, and transmits the recovered clock to the
BTS through the E1 port. In this manner, the clock of the BTS is synchronized with the clock
of the BSC.
NOTE
l If the clock of PE1 is synchronous with the clock of PE2 but the two clocks are asynchronous with the
reference clock of the BSC, the enhanced CES ACR clock synchronization solution can also be used
to transmit the clock of the BSC.
l If the clock of PE1 is synchronous with the clock of PE2 and the two clocks are also synchronous with
the reference clock of the BSC, CES retiming is used to transmit the clock of the BSC.
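The SN-based recovery described above can be sketched numerically. `PAYLOAD_BITS` (the amount of E1 payload carried per CES packet) and the sampling interface are assumptions; the idea is only that the SN delta measures how much E1 traffic the Master's clock emitted while the Slave's local clock measured the elapsed time, so the ratio recovers the Master's frequency.

```python
E1_BIT_RATE = 2_048_000      # nominal E1 rate, bit/s
PAYLOAD_BITS = 8 * 256       # hypothetical E1 payload per CES packet, bits

def estimate_master_rate(samples):
    """Estimate the Master's E1 bit rate from (SN, local arrival time)
    pairs: the SN delta counts payload sent under the Master's clock,
    while the local timestamps measure elapsed Slave time."""
    (sn0, t0), (sn1, t1) = samples[0], samples[-1]
    return (sn1 - sn0) * PAYLOAD_BITS / (t1 - t0)

def frequency_offset_ppm(measured):
    """Offset of the recovered clock from nominal, in parts per million;
    the Slave steers its ACR clock to drive this toward zero."""
    return (measured - E1_BIT_RATE) / E1_BIT_RATE * 1e6

# 1000 packets of 2048 payload bits observed over exactly 1 s of local time
rate = estimate_master_rate([(0, 0.0), (1000, 1.0)])
```

A long observation window and filtering would be needed in practice to reject packet delay variation on the PSN; this sketch only shows the frequency ratio at the core of the method.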
Figure 20-39 Typical application example of the standard CES ACR clock synchronization
solution
(The figure shows the BTS connected through a TDM network containing NE3 and NE4 to PE1 (Slave); CES packets carrying timestamps cross the PSN between PE1 and PE2 (Master); PE2 connects through E1 to the BSC, which traces the primary reference clock. PE1 and PE2 each run on their own system clocks.)
In this example, the clock of PE2 is synchronous with the clock of the BSC, and PE2 needs to
provide a synchronization clock for PE1, as well as NE3 and NE4 on the downstream TDM
network. In this case, PE2 (Master) adds a timestamp to the CES service according to its system
clock. PE1 (Slave) recovers the ACR clock according to the timestamp in the CES service, and
uses the ACR clock as its system clock. NE3 and NE4 on the downstream TDM network trace
the clock of PE1. In this manner, clocks of the NEs are synchronized.
l IETF RFC 4197: Requirements for Edge-to-Edge Emulation of Time Division Multiplexed (TDM) Circuits over Packet Switching Networks
l IETF RFC 4553: Structure-Agnostic Time Division Multiplexing (TDM) over Packet (SAToP)
l IETF RFC 5086: Structure-Aware Time Division Multiplexed (TDM) Circuit Emulation Service over Packet Switched Network (CESoPSN)
20.4.4 Availability
The CES ACR function requires the support of the applicable equipment, boards, and software.
Version Support

(Table: Applicable Equipment / Applicable Version. The applicable equipment includes the U2000.)

Hardware Support

(Table: Applicable Board / Applicable Version / Applicable Equipment / Applicable Object / Remarks. The applicable boards are TNN1CO1, TNN1D75E, TNN1D12E, N1CQ1, N1MD12, N1MD75, and R1ML1, with applicable object CES ACR; one entry carries the remark "Master NE".)
Maintenance Principles
None.
20.4.6 Principles
In the enhanced solution, CES ACR uses the sequence numbers (SNs) carried in CES packets to recover timing information.
Implementation on Master
Figure 20-40 shows the ACR implementation process on Master.
Figure 20-40 Implementation process on Master

(Figure: Master extracts the E1 line clock from the incoming E1 signal, encapsulates the TDM data into CES packets, and transmits the packets based on the E1 line clock.)

Master extracts clock frequency information from an E1 signal, and transmits CES packets based on the clock frequency information.
Implementation on Slave
Figure 20-41 shows the ACR implementation process on Slave.
Figure 20-41 Implementation process on Slave

(Figure: Slave receives and decapsulates the CES packets. A local timestamp generator records the arrival time of each packet; the recorded timestamps and the SNs recovered from the packets feed the ACR clock computing module, which outputs the ACR clock. The recovered E1 bit stream is written into a buffer and read out based on the ACR clock.)
1. The clock module uses the ACR clock to count tick values at a certain frequency.
2. The clock module records the tick value corresponding to the arrival time of each CES packet, and outputs the tick values to the ACR clock computing module.
3. Slave decapsulates the CES services and recovers the SNs of the CES packets. Then, Slave outputs the SNs to the ACR clock computing module.
4. The ACR clock computing module recovers the ACR clock based on the tick values, SNs, and packet loading time.
The ACR clock computing principles are as follows.
l Assume that the tick value increases by 1 every 10 us and that the packet loading time is 1000 us.
l Assume that the tick value corresponding to the arrival time of the previous packet is t1, and that the tick value corresponding to the arrival time of the current packet is t2.
l If the SNs of the two packets are consecutive, the packet loading time between them is 1000 us. The tick difference should therefore be 100 when the ACR clock is synchronized with the E1 clock that is extracted on Master.
l If t2 - t1 < 100, the ACR clock frequency is lower than that of the E1 clock extracted on Master, and the ACR clock computing module increases the ACR clock frequency. If t2 - t1 > 100, the ACR clock frequency is higher than that of the E1 clock extracted on Master, and the module decreases the ACR clock frequency. By adjusting the frequency in this way, the module keeps the difference t2 - t1 at 100.
5. After the CES packet on Slave is recovered to an E1 bit stream, it is written into a First In, First Out (FIFO) queue. Then, the E1 signal is read from the FIFO queue based on the ACR clock. As a result, the output E1 signal contains the ACR clock information (that is, the ACR clock is synchronized with the E1 clock of Master). In this manner, the E1 clock is transparently transmitted.
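The adjustment rule in the principles above can be sketched in Python. This is an illustrative model only, not Huawei's implementation; the function name and the handling of SN gaps are assumptions.

```python
# Illustrative model of the ACR adjustment rule: ticks are counted every
# 10 us of ACR time and the packet loading time is 1000 us, so packets
# with consecutive SNs should arrive 100 ticks apart when locked.

def acr_adjust(prev_tick, curr_tick, prev_sn, curr_sn,
               ticks_per_packet=100):
    """Return the frequency action: 'increase', 'decrease', or 'hold'."""
    sn_gap = curr_sn - prev_sn        # loading intervals between packets
    if sn_gap <= 0:                   # duplicate or misordered packet
        return "hold"
    expected = ticks_per_packet * sn_gap
    measured = curr_tick - prev_tick
    if measured < expected:           # ACR clock runs slow: speed it up
        return "increase"
    if measured > expected:           # ACR clock runs fast: slow it down
        return "decrease"
    return "hold"                     # locked to the E1 clock of Master
```

Scaling the expected tick count by the SN gap lets the same rule tolerate a lost packet rather than misreading it as a frequency error.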
For enhanced CES ACR, Sequence Number Mode must be set to the same value for the CES services configured at the master equipment and the slave equipment.
Operation: Configure the primary clock for an ACR clock domain. Set the major parameters as follows: set CES Service corresponding to ACR Clock Source.
Description: Required.
NOTE
For details, refer to "Configuring E1 Ports" in Configuration Guide (Packet Transport Domain).
Prerequisites
l
Context
NOTE
l An ACR clock domain can bind only the CES services from the E1 ports on a local board.
l A maximum of four ACR clock domains can bind CES services.
Procedure
Step 1 In the NE Explorer, select the required NE from the Object Tree.
Step 2 Choose Configuration > Clock > ACR Clock from the Function Tree.
Step 3 In CES Service, select a CES service for primary clock extraction.
Step 4 Click Apply. Then, close the dialog box that is displayed.
----End
Prerequisites
l
Context
NOTE
l E1 ports output clocks from the system clock domain by default. Therefore, it is unnecessary to set
application ports to the system clock domain if system clocks are to be used.
l An ACR clock domain can only be applied to the E1 ports on a local board.
l The E1 ports corresponding to the primary clock for an ACR clock domain must be added to the ACR
clock domain.
l The four ACR clock domains can bind CES services either from the first 16 E1 ports or from the last 16 E1 ports on a local board. That is, the four ACR clock domains cannot simultaneously bind CES services from both the first 16 and the last 16 E1 ports on a local board.
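The binding limits stated in the notes above can be expressed as a small check. This is a sketch under the assumption that the E1 ports on the board are numbered 1 through 32; the function name is hypothetical.

```python
# Checks the quoted limits: at most four ACR clock domains, and all
# bound E1 ports must come from ports 1-16 or from ports 17-32 of the
# local board, never from both halves at once.

def acr_bindings_valid(domain_ports):
    """domain_ports: one list of E1 port numbers (1-32) per clock domain."""
    if len(domain_ports) > 4:
        return False                          # too many ACR clock domains
    ports = [p for group in domain_ports for p in group]
    uses_first_half = any(p <= 16 for p in ports)
    uses_second_half = any(p > 16 for p in ports)
    return not (uses_first_half and uses_second_half)
```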
Procedure
Step 1 In the NE Explorer, select the required NE from the Object Tree.
Step 2 Choose Configuration > Clock > Clock Domain from the Function Tree.
Step 3 Click New.
Step 4 Configure the parameters of the Create Clock Domain Port tab according to planning
information.
1.
2. In Clock Domain Board, select the board where the ACR clock domain resides.
3.
a.
b.
c. Click
Step 5 Click Apply. Then, close the dialog box that is displayed.
----End
(Figure: BTS1 and BTS2 connect through E1 ports to NE1 (Slave). NE1 carries the CES services to NE2 (Master), which connects through an E1 port to the BSC.)
Planning Information About the Primary Clock for the ACR Clock Domain on
Slave
Table 20-30 provides the planning information about the primary clock for ACR clock domain 1 on NE1 (Slave).
Table 20-30 Planning information about the primary clock for ACR clock domain

Attribute            Description
-                    15-N1EG8-ACR1
CES Service          CES_BTS1

Planning information about the clock domain ports:

Attribute            Description
Clock Domain         15-N1EG8
Board                35-N1D12E
Selected Port        35-N1D12E-1(CES_BTS1), 35-N1D12E-2(CES_BTS2)

Planning information about the port clock mode:

Attribute            Description
Clock Mode           Slave Mode for NE1 (35-N1D12E-1, 35-N1D12E-2) and NE2 (35-N1D12E-1)
Prerequisites
l You must understand the networking, requirements, and service planning of the example.
Procedure
Step 1 In the NE Explorer, select NE1 from the Object Tree.
Step 2 Configure a primary clock in the CES ACR clock domain for NE1 (Slave).
1. Choose Configuration > Clock > ACR Clock from the Function Tree.
2.
3.
Attribute            Description
-                    15-N1EG8-ACR1
CES Service          CES_BTS1
Step 3 On NE1 (Slave), configure the ports that use the CES ACR clock domain.
1. Choose Configuration > Clock > Clock Domain from the Function Tree.
2. Click New. The Create Clock Domain Port dialog box is displayed.
3.
4. In Clock Domain Board, select the board where the ACR clock domain resides.
5.
b.
c. Click
Attribute            Description
Clock Domain         15-N1EG8
Board                35-N1D12E
Selected Port        35-N1D12E-1(CES_BTS1), 35-N1D12E-2(CES_BTS2)
6.
Step 4 Configure Advanced Attributes of the ports inputting E1 clocks on NE2 (Master) and NE1
(Slave).
1. In the NE Explorer, select the NE from the Object Tree and choose Configuration > Packet Configuration > Interface Management > PDH Interface from the Function Tree.
2.
3. Select the required port and set the parameters for its advanced attributes.
4.
Attribute            Description
Clock Mode           Slave Mode for NE1 (35-N1D12E-1, 35-N1D12E-2) and NE2 (35-N1D12E-1)
----End
Meaning
CES_ACR_LOCK_ABN
(Parameter reference: Value / Description entries for ACR Clock Source, Track Mode, Lock Status (Lock, Unlock), Clock Domain, Clock Domain Board, Clock Port, and Available Port.)
21 Outband DCN
that traverses a third-party network. When this solution is used, the transmission bandwidth of
one E1 service is occupied. The solution can be used if management messages need to traverse
third-party equipment but none of OSI over DCC, IP over DCC, and DCC transparent
transmission is supported.
21.7 DCN Maintenance
This chapter describes the troubleshooting, maintenance cases, and relevant alarms and
performance events of the DCN.
(Figure: the NMS connects through the external DCN, which is comprised of routers and LAN switches, to the internal DCN formed by the NEs.)

l External DCN
On an actual network, the U2000 and NEs may be located on different floors of a building, in different buildings, or even in different cities. Hence, an external DCN that is comprised of data communication equipment, such as LAN switches and routers, is required to connect the U2000 and the NEs. As the external DCN involves knowledge of data communication, no detailed description is provided in this document. The DCN mentioned in this document refers to the internal DCN, unless otherwise specified.
l Internal DCN
On an internal DCN, the equipment uses the DCC bytes in overheads as physical channels
of the DCN.
HWECC Solution
When the network is comprised of only OptiX transmission equipment, the HWECC solution is preferred.
In this solution, NEs transmit the data that supports the HWECC protocol through DCCs. The solution features easy configuration and convenient application. However, because the HWECC protocol is a proprietary protocol, it cannot solve management problems when the network contains both OptiX equipment and third-party equipment.
For details on the HWECC solution, see HWECC Solution.
HWECC Protocol
ITU-T G.784 defines the architecture of the ECC protocol stack based on the OSI seven-layer
reference model. The HWECC protocol stack is based on the ECC protocol stack.
The HWECC protocol consists of four layers: physical layer (DCC), media access layer, network layer, and transport layer. See Figure 21-2.
(Figure 21-2: the HWECC protocol stack — physical layer, media access layer, network layer, and transport layer — mapped against the layers of the OSI reference model.)
l Physical layer
The main function of the physical layer is to control physical channels. The physical layer
performs the following functions:
Receives and sends data over the physical channels, including receiving data from
physical channels and transferring the data to the upper layer.
Receives the data frames transferred from the upper layer and sends them to physical
channels.
The channels at the physical layer include DCC channels and extended ECC channels. The physical layer can process data frames of up to 1024 bytes.
corresponding physical channel in the physical layer through the MAC connection.
Otherwise, the MAC layer discards the data frame.
l Network layer
The main function of the network layer is to provide the route addressing for data frames
and the route management for the DCC communication network. The network layer
performs the following functions:
Establishes and maintains ECC routes.
Each route item includes the following information: address of the destination NE,
address of the transfer NE, transfer distance (the number of passed transfer NEs), route
priority (The priority value ranges from 1 to 7. The priority of an automatically
established route is 4 by default. The system always selects the route with the highest
priority.), and mode (0 represents the automatic route and 1 represents the manual route).
Provides the data communication service.
The network layer receives the packet transferred from the MAC layer. If the destination
address of the packet is the local station, the network layer transfers the packet to the
transport layer. Otherwise, the network layer requests the MAC layer to transfer the
packet to the transfer station according to the route item that matches the destination
address in the network layer routing table.
The network layer sends the packet from the transport layer. The network layer requests
the MAC layer to transfer the packet to the transfer station according to the route item
that matches the destination address of the packet in the network layer routing table.
l Transport layer
The main function of the transport layer (L4 layer) is to provide the end-to-end
communication service for the upper layer. As the communication between the OptiX
equipment and the U2000 is controlled by the end-to-end connection-oriented service in
the application layer, the L4 layer provides only the end-to-end connectionless
communication service, that is, transparent data transfer service.
NOTE
In the HWECC protocol stack, the NE address used by each layer is the ID of the NE. The NE ID has 24
bits. The highest eight bits represent the subnet ID (or the extended ID) and the lowest 16 bits represent
the basic ID. For example, if the ID of an NE is 0x090001, the subnet ID of the NE is 9 and the basic ID
is 1.
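The ID split described in the note can be expressed in a few lines; the helper name below is hypothetical.

```python
# Splits a 24-bit NE ID into the subnet (extended) ID and the basic ID:
# the high 8 bits are the subnet ID, the low 16 bits the basic ID.

def split_ne_id(ne_id):
    subnet_id = (ne_id >> 16) & 0xFF   # highest eight bits
    basic_id = ne_id & 0xFFFF          # lowest 16 bits
    return subnet_id, basic_id

# The document's example: NE ID 0x090001 -> subnet ID 9, basic ID 1.
```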
Extended HWECC
The physical layer of the ECC is the DCC, whose data is transmitted over fiber. In certain cases, a network or NE may be isolated, with no DCC channel to the gateway NE (no fiber connection). The extended ECC refers to the ECC protocol stack that is loaded on the
TCP/IP protocol stack. That is, the HWECC protocol stack is carried through the extended
channel (such as Ethernet) instead of the DCC channel to meet the requirements of special
scenarios. The difference between the extended ECC and the ECC is that the physical layer of
the ECC is the DCC channel and that of the extended ECC is an extended channel (such as
Ethernet channel). Figure 21-3 shows the networking environment with the extended ECC.
(Figure 21-3: the NMS connects over network cables through HUB1 to GNE1 and the NEs of Subnet 1, and through HUB2 to the NEs of Subnet 2; within each subnet, NE2-NE6 and NE7-NE12 are interconnected by fiber.)
Supporting Capability
HWECC uses the D1-D3 bytes as the physical transmission path. The D4-D12 or D1-D12 bytes
can also be used.
NOTE
HWECC supports communication over fibers or Ethernet cables. When no optical path is available between nodes, set up the extended ECC by using Ethernet cables.
21.2.1.2 Networking
The HWECC protocol supports various networking modes. NEs can be connected through
optical interfaces or Ethernet ports for ECC communication. In certain situations, the HWECC
protocol supports transparent transmission of the OAM information from the third-party
equipment.
The HWECC protocol has the following typical applications depending on the networking.
(Figure: the NMS connects over network cables through HUB1 to gateway NE GNE1. The NEs of Subnet 1 and Subnet 2 (NE2-NE12) are interconnected by fiber; HUB2 connects certain NEs over network cables.)
When network management information is transmitted between Huawei optical network equipment, there must be a gateway NE that communicates with the U2000. The U2000 is connected to the gateway NE through Ethernet. Hence, you can test, manage, and maintain the entire network. The U2000 improves the quality of service (QoS) on the network and lowers the expenditure for maintenance. This ensures the rational use of network resources. Non-gateway NEs are connected to the gateway NE through the ECC. This realizes information transmission between the U2000 and the non-gateway NEs. In addition, the extended ECC communication between NEs can be performed through network interfaces, such as between NE6 and NE7.
(Figure: a third-party NM connects through a HUB to third-party equipment; the third-party NEs interconnect across a network of Huawei equipment NEs, which transparently transmit the third-party OAM information.)
For such networking, the OAM information of the third-party equipment should travel through the Huawei equipment, which provides the function to transparently transmit the DCC. During the transmission, the Huawei equipment does not analyze the data. To realize DCC transparent transmission, perform the corresponding configuration at each NE along the data transmission trail.
21.2.2 Availability
The HWECC solution requires the support of the applicable equipment, boards, and software.
Version Support

(Table: Applicable Equipment / Applicable Version. The applicable equipment includes the T2000 and the U2000.)

Hardware Support

(Table: Applicable Board / Applicable Version / Applicable Equipment. The applicable boards are N4GSCC, N6GSCC, N1PSXCS, N2PSXCSA, T1PSXCSA, R1PCXLN, TNN1PSXCS, TNN1SCA, N1SXCSA, and N3PSXCSA.)
l The HWECC protocol stack of NEs can communicate with the IP protocol stack.
l The HWECC protocol stack of NEs can communicate with the OSI protocol stack in the same Layer 1 area.
l It is recommended that you adopt the HWECC protocol to manage the OptiX equipment if DCC bytes are used to transparently transmit NM messages when the OptiX equipment is interworked with third-party equipment.
21.2.4 Principles
The HWECC solution is implemented by establishing ECC routes and transferring messages.
In addition, the ECC protocol stack can also flow over extended ECC channels to meet the special
requirements in certain scenarios.
1. The physical layer of an NE maintains the status information of the DCC to which each line port corresponds.
2. The MAC layer of the NE establishes the MAC connection between the NE and the adjacent NE.
The steps are as follows:
a.
b. After receiving the MAC_REQ, the adjacent NE returns the connection response frame (MAC_RSP).
c. If the MAC_RSP is received within the specified time, the NE establishes a MAC connection between the NE and the adjacent NE.
3. The network layer of the NE establishes the network layer routing table.
The steps are as follows:
a. According to the status of the MAC connection, the NE establishes an initial network layer routing table.
b. The NE broadcasts its routing table to the adjacent NE in a periodical manner through the routing response message.
c. The adjacent NE updates its network layer routing table according to the received routing response message and the shortest path first algorithm.
d. At the next route broadcasting time, the NE broadcasts its current network layer routing table to the adjacent NE.
(Figure 21-6: a ring network in which NE1 connects to NE2 and NE5; NE3 and NE4 complete the ring.)
The following describes how to establish ECC routes between NEs. The network shown in
Figure 21-6 is provided as an example.
1. The physical layer of each NE maintains the status information of the DCC to which each line port corresponds. The physical layer of each NE detects that there are two available DCCs.
2. The MAC layer of the NE establishes the MAC connection between the NE and the adjacent NE.
NE1 is considered as an example to describe how to establish the MAC connection.
a. NE1 broadcasts the frame MAC_REQ to NE2 and NE5 in a periodical manner through its two available DCCs. The frame MAC_REQ contains the ID of NE1.
b. After receiving the frame MAC_REQ, NE2 and NE5 return their respective MAC_RSP frames. The frame MAC_RSP from NE2 contains the ID of NE2 and the frame MAC_RSP from NE5 contains the ID of NE5.
c. After receiving the MAC_RSP frames, NE1 establishes a MAC connection between NE1 and NE2 and a MAC connection between NE1 and NE5 according to the NE ID, the DCC that reports the frame, and other information.
3. The network layer of the NE establishes the network layer routing table.
NE1 is considered as an example to describe how to establish the network layer routing table.
a. According to the status of the MAC connection, NE1 establishes an initial network layer routing table. In the routing table, there are two routes, one to NE2 and one to NE5.
b. NE1 broadcasts its routing table to adjacent NEs in a periodical manner through the routing response message.
c. After receiving the routing response message from NE1, NE2 and NE5 update their respective network layer routing tables. After the update, there is a route to NE5 in the network layer routing table of NE2, and the transfer NE is NE1. There is a route to NE2 in the network layer routing table of NE5, and the transfer NE is also NE1.
d. Similarly, NE1 also adds the routes to NE3 and NE4 in its network layer routing table according to the routing response messages from NE2 and NE5. There are two routes between NE1 and NE3. The distance of the route whose transfer NE is NE2 is 1 and the distance of the route whose transfer NE is NE5 is 2. Hence, according to the shortest path first principle, only the route whose transfer NE is NE2 is retained in the network layer routing table. The routes to NE4 are processed in the same way as those to NE3.
e. If the DCC between NE1 and NE2 becomes faulty, the MAC connection between NE1 and NE2 fails. In this case, NE1 updates the routes to NE2 and NE3 in its network layer routing table according to the routing response message from NE5. Hence, the routes to NE2 and NE3 are re-established. In this way, the ECC route is protected.
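The update rule used in the example above can be sketched as follows. This is an illustrative model; the dictionary layout and the function name are assumptions, and the transfer distance counts passed transfer NEs as the document defines it.

```python
# Each NE keeps, per destination, the transfer (next-hop) NE and the
# transfer distance; on receiving a neighbor's routing table, it keeps
# only the shortest route (shortest path first).

def update_routes(table, neighbor, neighbor_table):
    """table / neighbor_table: {destination: (transfer_ne, distance)}."""
    for dest, (_, dist) in neighbor_table.items():
        candidate = (neighbor, dist + 1)  # one more transfer via neighbor
        if dest not in table or candidate[1] < table[dest][1]:
            table[dest] = candidate
    return table

# NE1 starts with direct routes to NE2 and NE5 (distance 0 transfers).
ne1 = {"NE2": ("NE2", 0), "NE5": ("NE5", 0)}
# NE2 advertises a direct route to NE3; NE5 advertises NE4 directly and
# NE3 via one transfer NE.
update_routes(ne1, "NE2", {"NE3": ("NE3", 0)})
update_routes(ne1, "NE5", {"NE4": ("NE4", 0), "NE3": ("NE4", 1)})
# NE1 keeps the shorter route to NE3 (via NE2, distance 1), as in step d.
```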
1. The U2000 transfers application layer messages to the gateway NE through the TCP connection between them.
2. The gateway NE extracts the messages from the TCP/IP protocol stack and reports the messages to the application layer.
3. The application layer of the gateway NE queries the address of the destination NE in the messages. If the address of the destination NE is not the same as the address of the local station, the gateway NE queries the core routing table of the network layer according to the address of the destination NE to obtain the corresponding route and the communication protocol stack of the transfer NE. As the communication protocol stack of the transfer NE in Figure 21-7 is HWECC, the gateway NE transfers the messages to the transfer NE through the HWECC stack.
4. After receiving the packet that encapsulates the messages, the network layer of the transfer NE queries the address of the destination NE of the packet. If the address of the destination NE is not the same as the address of the local station, the transfer NE queries the network layer routing table according to the address of the destination NE to obtain the corresponding route and then transfers the packet.
5. After receiving the packet, the network layer of the destination NE reports the packet to the application layer through Layer 4 because the address of the destination NE of the packet is the same as the address of the local station. The application layer then processes the message sent from the U2000.
(Figure 21-7: protocol stacks along the message path. The NMS and the gateway NE communicate through Application/TCP/IP/Ethernet; the gateway NE, the transfer NE, and the destination NE communicate through the HWECC stack — L4/NET/MAC/DCC.)
NOTE
The core routing table synthesizes the transport layer routing tables of all communication protocol stacks.
Each route item includes the following information:
l ID of the destination NE
l Transfer distance
(Figure: the NMS connects over network cables through HUB1 and HUB2 to the NEs; GNE1 and NE2-NE6 form Subnet 1 and NE7-NE12 form Subnet 2, interconnected by fiber.)
The extended ECC establishes the MAC connection of adjacent NEs through the TCP connection. The ECC can be extended in the automatic mode or the manual mode.
The implementation principle of the automatic ECC extension is as follows:
1. Each NE obtains the IP addresses of other NEs that are on the same Ethernet through the Address Resolution Protocol (ARP).
2. The NE with the largest IP address automatically functions as the server and listens for the TCP connection requests from the clients.
3. Other NEs automatically function as clients and send TCP connection requests to the server.
4. After receiving the TCP connection request from a client, the server establishes the corresponding TCP connection.
5. The NEs use the TCP connection as a MAC connection to realize ECC communication.
The implementation principle of the manual ECC extension is basically the same as that of the
automatic ECC extension. The difference is that in the manual mode, the server, clients, and
connection port numbers are manually specified.
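The automatic role election described above can be sketched in a few lines; the function name is an assumption, and real NEs of course perform this election internally.

```python
import ipaddress

# On one Ethernet segment, the NE with the numerically largest IP
# address becomes the server; all other NEs become TCP clients.

def elect_server(ne_ips):
    addrs = [ipaddress.IPv4Address(ip) for ip in ne_ips]
    server = max(addrs)                      # largest address wins
    clients = sorted(str(a) for a in addrs if a != server)
    return str(server), clients

server, clients = elect_server(["129.9.0.1", "129.9.0.102", "129.9.0.3"])
# server == "129.9.0.102"; the other two NEs connect to it as clients.
```

Comparing `IPv4Address` objects rather than strings ensures "129.9.0.102" ranks above "129.9.0.3", which a plain string comparison would get wrong.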
l When the equipment interworks with other Huawei optical network equipment, the HWECC or IP over DCC protocol is recommended. The entire DCN must use the same communication protocol.
l When the equipment interworks with third-party equipment, select the protocol that is supported by the third-party equipment, that is, the IP over DCC protocol or the OSI over DCC protocol.
l When the equipment interworks with third-party equipment, select the DCC bytes to transparently transmit services if the third-party equipment supports neither the IP over DCC protocol nor the OSI over DCC protocol.
l Regardless of the communication protocol you select to build the DCN, properly set the DCN scale and divide the network according to the network conditions to ease the impact of a large network scale.
l The DCN should be configured as a ring to ensure the reliability of network communication, so that routes can be protected when a fiber is cut or an anomaly occurs on an NE. If the fibers of the equipment do not form a ring, establish extra DCN channels to realize a ring network and protect the routes.
The management information from the U2000 to ordinary NEs must be forwarded by the
gateway NE. When creating an NE on the U2000, you need to specify the gateway NE first. In
the network-wide planning, select the gateway NE based on the actual networking and the
communication protocol.
When planning the gateway NEs, comply with the following principles:
l Correctly set the IP address and subnet mask for the gateway NE.
l Only the NE connected to the U2000 through the network cable can be configured as a gateway NE.
l There should not be too many gateway NEs on a network. Otherwise, the network performance may be affected. It is recommended that the number of gateway NEs managed by an NMS not exceed 100. If the number exceeds 100, use the extended ECC to combine the gateway NEs.
l Each gateway NE can manage no more than 100 NEs; it is recommended that the number not exceed 50.
l Usually, multiple independent NEs in a network topology (for example, the NEs in a ring network) should be connected to the same gateway NE.
l In the actual networking, the gateway NE has the largest traffic volume. To ensure stable communication, select the equipment with strong ECC processing capability as the gateway NE. The gateway NE and the other NEs should form a star network to reduce the traffic volume of the other NEs.
l To ensure the reliability of the connection between the network and the U2000, select an NE as the secondary gateway NE by using the same criteria as for the primary gateway NE. The secondary gateway NE can also manage certain NEs. In this way, the two gateway NEs back up each other to enhance network stability.
l An NE ID is a 24-bit binary number, divided into the high order 8 bits and the low order 16 bits. The high order 8 bits represent the extended ID (9 by default), also called the subnet number because it identifies a subnet. The value of a subnet number cannot be 0 or 0xFF. The low order 16 bits represent the basic ID, whose value cannot be 0 or greater than or equal to 0xBFEF.
l On a ring network, the IDs of the NEs should be incremented by one along the direction of the primary ring.
l On a ring network, it is recommended that you configure the IDs of all the NEs at a station, and then configure the IDs of all the NEs at the next station.
l In the case of a complex network, divide the network into rings and chains. Set the IDs of the NEs on the ring to numbers from 1 to N, and the IDs of the NEs on the chain to numbers starting from N+1.
l An IP address is used in the communication between a gateway NE and the U2000. Hence, a gateway NE requires an IP address. In addition, an NE that requires the extended ECC function must be provided with an IP address.
l The NE IP address does not need to be manually set; by default it is derived from the NE ID. The format of the IP address is 129.E.A.B. The second number, E, is the extended ID of the NE (9 by default). The third number, A, and the fourth number, B, are the high order 8 bits and the low order 8 bits of the basic ID. When the NE IP address is manually set, this mapping between the IP address and the NE ID no longer exists.
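Under the default mapping described above, the NE IP can be derived directly from the NE ID. A sketch follows; the helper name is assumed, and A.B is read as the high and low bytes of the 16-bit basic ID.

```python
# Default NE IP: 129.E.A.B, where E is the extended ID (high 8 bits of
# the NE ID) and A.B are the high and low 8 bits of the basic ID.

def default_ne_ip(ne_id):
    extended_id = (ne_id >> 16) & 0xFF
    basic_id = ne_id & 0xFFFF
    return "129.{}.{}.{}".format(extended_id, basic_id >> 8, basic_id & 0xFF)

# NE 9-202 (extended ID 9, basic ID 202) maps to 129.9.0.202, which
# matches the allocation example in Figure 21-10.
```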
l If the OptiX OSN 3500 or another OptiX equipment series is used as the gateway NE of a network, the number of NEs on each HWECC subnet should not exceed 200; it is recommended that the number not exceed 100.
l Configure the HWECC subnet as a ring to ensure that routes can be protected when a fiber is cut or an anomaly occurs on an NE.
l The OptiX equipment allocates ECC channels to the interfaces on each board automatically. Because the number of ECC channels on the equipment is limited, disable unnecessary ECC channels.
NOTE
Before allocating ECC channels, see Outband DCN to query the DCC resource allocation mode. To
query the actual situation of ECC channels, see Querying and Allocating DCC Resources.
l The number of NEs managed by a gateway NE is limited. Therefore, multiple gateway NEs are required when the number of NEs is large.
l In applications where extended ECC communication is required, the manually extended ECC is recommended. Avoid the automatically extended ECC, so that the bandwidth between NEs using the extended ECC for communication is saved.
l When the number of Huawei devices that use the extended ECC communication exceeds four, the manually extended ECC communication must be used.
l When configuring the manually extended ECC, configure one or more NEs as servers and the other NEs as clients. One server NE can have a maximum of seven client NEs. If the number of client NEs exceeds seven, select a client NE as the server NE for the remaining client NEs; that NE then functions as a client and a server at the same time, and so on. The port numbers of the server NEs must be different.
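One way to count how many server NEs such a cascade needs can be sketched as follows. This is an assumption-laden model of the stated rule: a seven-client limit per server, with each additional server being one of the previous server's clients.

```python
import math

# One server covers itself plus seven clients (8 NEs); each additional
# cascaded server, itself a client of the previous server, adds room
# for seven more NEs.

def servers_needed(ne_count, max_clients=7):
    if ne_count < 2:
        return 0                  # a single NE needs no extended ECC
    return math.ceil((ne_count - 1) / max_clients)

# servers_needed(8) == 1; a ninth NE forces a second, cascaded server.
```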
(Figure 21-9: a transmission network comprised of only OptiX equipment. The NMS connects through a hub over network cables; NE101, NE102, NE103, NE104, and NE106 form one part of the network and NE201, NE202, NE203, and NE204 another. The NEs are interconnected by fiber, and a 2 Mbit/s channel is also present.)
Figure 21-9 shows a transmission network that is comprised of only the OptiX equipment. To plan the HWECC, do as follows:
1. More than 80 sets of optical transmission equipment are present on the network. Thus, divide this ECC network into two ECC subnets according to the equipment type.
2. Select NE101 and NE201, which are located in the middle of the optical transmission service, as gateway NEs.
3. As shown in this figure, the U2000 and NE101 are located at the same place, and NE201 is located at another place. In this case, set up an external DCN connecting the U2000 and NE201 by using a Quidway 2501 router.
4. Allocate IDs and IP addresses for all the NEs according to the situation of the network, as shown in Figure 21-10.
(Figure 21-10: ID and IP address allocation, labeled "Extended ID-Basic ID / IP address / Gateway" per NE:
l 9-201: 11.0.0.201, gateway 11.0.0.1
l 9-202: 129.9.0.202, gateway 0.0.0.0
l 9-103: 129.9.0.103, gateway 0.0.0.0
l 9-101: 10.0.0.101, gateway 0.0.0.0
l 9-104: 129.9.0.104, gateway 0.0.0.0
l 9-106: 129.9.0.106, gateway 0.0.0.0
l 9-102: 129.9.0.102, gateway 0.0.0.0
The external DCN side uses the addresses 10.0.0.1/16 and 10.0.0.100/16.)
(Flowchart: If extended ECCs are required, configure extended ECCs. If dividing the network into ECC subnets is required, divide the network into ECC subnets. If DCC transparent transmission is required, configure DCC transparent transmission. Then the procedure ends.)
Description
l Ensure that the gateway NE has an ECC route to each of its managed non-gateway NEs.
l Ensure that the gateway NE has no ECC routes to NEs on the other ECC subnets.
l Ensure that the ECC route uses the shortest path.
l For details on how to query ECCs, see Querying the ECC Routes of an NE.
Prerequisites
l
Procedure
Step 1 Select an NE in the NE Explorer. Choose Configuration > NE Attribute from the Function
Tree.
Step 2 Click Modify NE ID. The Modify NE ID dialog box is displayed.
l After the operation, the communication between the U2000 and the NE is interrupted.
l After you change the ID of the NE, the original configuration information may be lost and thus the
protection and other features fail to work normally.
----End
Follow-up Procedure
After you change the ID of the NE, a warm reset is performed on the SCC board. In this case,
you need to log in to the NE again after a certain period.
Prerequisites
l
Procedure
Step 1 Select an NE from the Object Tree in the NE Explorer. Choose Communication >
Communication Parameters from the Function Tree.
Step 2 Set the communication parameters of the NE according to the network planning.
Prerequisites
l
Context
The NE uses D1-D3 as DCC by default and allows the DCC communication.
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Communication > DCC
Management from the Function Tree.
Step 2 Click the DCC Rate Configuration tab. Then, click New. The system displays the dialog box.
Set the Port, Channel Type, Protocol Type and LAPD Role fields.
Step 3 Click OK, and then click OK in the Operation Result dialog box.
Step 4 Optional: Click Query to query DCCs.
Step 5 Optional: Select the required DCC, and modify the parameters according to Table 21-2. Then,
click Apply to finish the modification.
Step 6 Optional: Select the required DCC, and click Delete to delete the DCC.
NOTE
When the board is configured with TPS, the DCCs on the working board are automatically copied to the
protection board. Therefore, the DCCs on the protection board cannot be deleted.
----End
Example
Table 21-2 Parameters
Parameter
Value Range
Default Value
Description
Port
Channel Type
D1-D1
It is recommended
that you use the
default value except
in the following
cases:
l When the IP over
DCC solution or
OSI over DCC
solution is
applied, set
Channel Type of
the SDH line
ports to the same
value as the
Channel Type of
third-party
network.
l When the DCC
transparent
transmission
solution is
applied, the
Channel Type of
the SDH line
ports cannot
conflict with the
channel type of
the third-party
network.
Protocol Type
IP
It is recommended
that you use the
default value except
in the following
cases:
l When the
HWECC solution
is applied, set
Protocol Type to
HWECC.
l When the OSI
over DCC
solution is
applied, set
Protocol Type to
OSI.
LAPD Role
Network, User
Network
l This parameter is
valid only when
Protocol Type is
set to OSI.
l Set LAPD Role
to User at one end
of a DCC and to
Network at the
other end of the
DCC.
Prerequisites
l
Context
You can divide an SDH network into multiple ECC subnets by deleting DCC channels D1-D3
of the optical interface that is on the boundary NE of one subnet and is directly connected to the
equipment on another subnet.
Procedure
Step 1 In the NE Explorer, select the required NE and then choose Communication > DCC
Management from the Function Tree.
Step 2 Click the DCC Rate Configuration tab, right-click DCC channels D1-D3 of the optical
interface that is directly connected to the equipment on another subnet, and select Delete.
----End
Prerequisites
l
The equipment must be installed according to the planning. The connections of the cables
and fibers are correct.
Huawei Proprietary and Confidential
Copyright Huawei Technologies Co., Ltd.
The NE must be created on the U2000. The communication between the U2000 and the
NE must be normal.
Context
When configuring the extended ECC, configure one or multiple NEs as the servers and other
NEs as the clients. One server NE can have a maximum of seven client NEs. If the number of
client NEs managed by a server NE exceeds eight, select a client NE as the server NE for the
remaining client NEs. In this case, the client NE functions as a client and a server at the same
time. The rest may be deduced by analogy. The port numbers of the server NEs must be different.
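The chaining rule above can be sanity-checked with a short sketch. The helper below is illustrative only (the function name is ours, not a U2000 interface); it assumes each server NE serves at most seven clients and that, when more clients remain, one already-connected client is promoted to serve the next group.

```python
import math

def count_server_nes(total_clients, max_clients_per_server=7):
    """How many NEs must act as extended-ECC servers.

    Each server NE serves at most 7 client NEs. When more clients
    remain, one already-connected client is promoted to server for
    the next group, acting as client and server at the same time.
    """
    if total_clients <= 0:
        return 0
    return math.ceil(total_clients / max_clients_per_server)
```

Whatever the resulting chain looks like, every NE acting as a server must use a distinct port number.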
CAUTION
l The ECC extended mode of the remote NEs must be modified first, and that of the gateway
NE must be modified last.
l Avoid extended ECC communication between the subnet gateway NEs.
l Do not set the gateway NE as the server. It is recommended that the NE closest to the gateway NE be the server NE.
CAUTION
l When setting the ECC extended mode remotely, strictly comply with the required setting sequence. Otherwise, communication between the U2000 and an affected NE cannot be restored automatically, and on-site resetting is required. Hence, when setting the ECC extended mode remotely, work out the ECC setting plan in advance to ensure that the settings are correct.
Procedure
Step 1 Setting the client NE
1.
In the status figure of the optical NE (ONE), right-click the client NE that is defined in the
ECC configuration plan and select the NE Explorer.
2.
Choose Communication > ECC Management from the left-hand Function Tree.
3.
4.
Set ECC Extended Mode to Specified mode in the right-hand Functional Panel.
5.
In the Set Client area, enter the IP address of the server NE in the Opposite IP field. Then
enter a port number in the Port field.
NOTE
The port number is used by the local NE for communication with the server NE. The port number
cannot be the same as the value of the Port field in the Set Server area.
6.
7.
The Operation Result dialog box is displayed, indicating that the operation is successful.
Click Close.
NOTE
l The IP addresses of NEs cannot be repeated and must be within the same subnet.
l The client NE can be the server NE of the next lower level. At that time, the client port and the server
port of the local NE cannot be the same. For specific procedure, see "Setting the Server NE."
l The port number must be within the range from 1601 to 1699, for example, 1610.
2.
Double-click the ONE icon, and the status figure of the ONE is displayed.
3.
4.
Choose Communication > ECC Management from the left-hand Function Tree.
5.
6.
Set ECC Extended Mode to Specified mode in the right-hand Functional Panel.
7.
Enter the port number in the Port field of the Set Server area. The port number must be
the same as the port number you enter in the Port field of the Set Client area of the client
NE.
NOTE
l The port number is used by the local NE for communication with the client NE.
l The port number of the server NE must be the same as the port number of the client NE.
8.
9.
The Operation Result dialog box is displayed, indicating that the operation is successful.
Click Close.
----End
Prerequisites
l
The equipment must be installed according to the planning. The connections of the cables
and fibers are correct.
The NE must be created on the U2000. The communication between the U2000 and the
NE must be normal.
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Communication > DCC
Management from the Function Tree.
Step 2 Click the DCC Transparent Transmission Management tab.
Step 3 Click New. The system displays the Create DCC Transparent Transmission Byte dialog box.
Step 4 Set the parameters of the DCC transparent transmission byte.
Step 5 Click OK, and then click OK in the Operation Result dialog box.
----End
Parameters
Parameter
Value Range
Default Value
Description
Source Timeslot/
Port
Line ports
Transparent
Transmission of
Overhead Byte at
Source Port
D1
l Only one
overhead byte can
be selected at a
time.
l X1, X2, X3, and
X4 represent the
self-defined
overhead bytes
that are used
when
asynchronous
data services are
transmitted.
l The overhead
byte must not be a
used byte (for
example, the byte
used by a DCC
that is in use).
Sink Timeslot/Port
Line ports
Transparent
Transmission of
Overhead Byte at
Sink Port
D1
l Only one
overhead byte can
be selected at a
time.
l The overhead
byte must not be a
used byte (for
example, the byte
used by a DCC
that is in use).
l Generally,
Transparent
Transmission of
Overhead Byte
at Sink Port is set
to the same value
as Transparent
Transmission of
Overhead Byte
at Source Port.
These two
parameters,
however, can be
set to different
values.
Prerequisites
The user must log in to the U2000.
The U2000 must be properly connected to the equipment.
You must be an NM user with "NE and network monitor" authority or higher.
Impact on System
When a warm reset is performed on the SCC board, the specified path mode becomes available.
This operation may affect the service.
Precautions
NOTE
Before changing the DCC path mode, you need to query the allocation information about the DCC path.
If the number of allocated DCC paths of a certain type is more than the number of DCC paths of the type
in the new DCC path mode, you must disable the optical interfaces that are not required to release the DCC
path resources.
NOTE
Before changing the DCC path mode, you need to check whether the type and number of currently used
DCC paths are available in the new DCC path mode. If the type and number of currently used DCC paths
are not available, you can change the path mode only after you delete the paths that are unavailable.
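The pre-change check described in the two notes above can be expressed as a small sketch (a hypothetical helper, not U2000 code): compare the currently allocated DCC paths of each type against the capacity of the new path mode.

```python
def paths_blocking_mode_change(used_paths, new_mode_capacity):
    """Return {path_type: excess count} for paths that block the change.

    used_paths:        currently allocated DCC paths per type, e.g. {"D1-D3": 10}
    new_mode_capacity: paths available per type in the new DCC path mode

    A non-empty result means those paths must be deleted, or their
    optical interfaces disabled, before the path mode can be changed.
    """
    blocking = {}
    for path_type, used in used_paths.items():
        available = new_mode_capacity.get(path_type, 0)
        if used > available:
            blocking[path_type] = used - available
    return blocking
```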
CAUTION
You cannot delete the paths at the optical interfaces that work with the other NEs to form a core
route.
CAUTION
The path rates at the optical interfaces that are interconnected between adjacent NEs should be
consistent with each other. If the path rates are inconsistent with each other, the NEs cannot be
monitored.
Procedure
Step 1 In the Workbench view, double-click Main Topology to display the main topology.
Step 2 Right-click the NE in the Main Topology and choose NE Explorer.
Step 3 Select Communication > DCC Management from the Function Tree.
Step 4 On the DCC Rate Configuration tab, click Query to display information such as whether the
current NE is allocated the DCC resources and the channel type.
Step 5 On the DCC Resources Allocation tab, click Query to display information such as DCC Path
Mode, Resource Usage.
Step 6 Optional: On the DCC Rate Configuration tab, select the DCC path to be deleted, and then
click Delete.
Step 7 Optional: On the DCC Rate Configuration tab, select the optical interface to be disabled, and
then set Enabled/Disabled to Disabled.
CAUTION
Do not disable the optical interfaces that are used with the other NEs to form a core route.
Step 8 Click the DCC Rate Configuration tab and click New. In the displayed dialog box, set
Port,Channel Type, and Protocol Type.
Step 9 Click Apply. In the Operation Result dialog box, click Close.
Step 10 Allocate the DCC resources. In the DCC Resource Allocation dialog box, click Allot. The
DCC Resource Allocation dialog box is displayed.
Step 11 In the left pane, select a proper DCC Path Mode. Click OK. The Confirm dialog box is
displayed twice. Click OK. In the Operation Result dialog box, click Close.
----End
Prerequisites
l
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Communication > NE ECC
Link Management from the Function Tree.
Step 2 Click Refresh and the ECC routes of the NE are displayed in the NE ECC Link Management
Table.
Step 3 Check whether the ECC routes and their parameters in NE ECC Link Management Table are
the same as the planned information.
Step 4 Optional: Click Add Manual Route to add manual ECC routes of the NE.
----End
Context
For the parameters in this example, see the example in Planning Guide.
The example provides the configurations of only the typical NEs, including NE101, NE102,
NE104, NE106, and NE201.
Configuration of the extended ECC and DCC transparent transmission is not required.
Procedure
Step 1 Set IDs of the NEs. For details on operations, see Changing the ID of an NE.
NE
Extended ID
NE ID
NE101
101
NE102
102
NE103
103
NE104
104
NE106
106
NE201
201
Step 2 Set IP addresses of the NEs. For details on operations, see Setting Communication Parameters
of an NE.
NE
IP Address
Subnet Mask
Default Gateway
NE101
10.0.0.101
255.255.255.0
0.0.0.0 (default
value)
NE102
129.9.0.102
255.255.255.0
0.0.0.0 (default
value)
NE103
129.9.0.103
255.255.255.0
0.0.0.0 (default
value)
NE104
129.9.0.104
255.255.255.0
0.0.0.0 (default
value)
NE106
129.9.0.106
255.255.255.0
0.0.0.0 (default
value)
NE201
11.0.0.201
255.255.255.0
11.0.0.1
Step 3 Configure DCCs of the NEs. For details on operations, see Configuring DCCs.
l Set the protocol type that the DCC uses to HWECC for all the line ports of the NEs.
l Set the channel type to D1-D3 for all the SDH ports on the NEs.
Step 4 Query ECC routes at NE101 and NE201. For details on operations, see Querying the ECC Routes
of an NE.
l At NE101, the routes to NE102, NE103, NE104, and NE106 are queried.
l At NE201, the routes to NE202, NE203, and NE204 are queried.
----End
(The figure shows the IP over DCC protocol stack: OSPF as the routing protocol, TCP/UDP at the transport layer, IP at the network layer, PPP at the data link layer, and DCC at the physical layer.)
Physical layer
The main function of the physical layer is to provide channels for data transmission for the
data end equipment.
Physical channels are classified into the following two categories:
DCC channel: The SDH-frame DCC bytes or the OTN-frame GCC bytes are used as
the channels for the communication between NEs.
Ethernet physical channel: The NE provides the Ethernet physical channel through the
Ethernet NM port or the NE cascading port.
Network layer
The main function of the network layer is to specify the network layer address for a
network entity and to provide the transferring and addressing functions.
The NE adopts the IP and the matching ARP and ICMP to realize the network layer
functions.
Transport layer
The main function of the transport layer is to provide the end-to-end communication service
for the upper layer. The NE supports the connection-oriented TCP and the connectionless UDP.
Routing protocols
Routing protocols belong to the scope of the application layer. The NE supports Open
Shortest Path First (OSPF).
The OSPF protocol is a dynamic routing protocol that is based on the link status. The OSPF
protocol divides an autonomous system into several areas. Route nodes exchange routing
information in an area. The route nodes at the edge of an area summarize and exchange
information with the routers in other areas. Areas are identified by area IDs. The area ID
has the same format as the IP address.
Currently, the OSPF protocol of the OptiX equipment supports only the routes within an
area and does not support the routes between areas. Hence, the gateway NE and all its
managed non-gateway NEs must be in the same OSPF area. By default, the line port of the
OptiX equipment is enabled with the OSPF protocol but the Ethernet port is not enabled
with the OSPF protocol. Hence, to form a network through the Ethernet port, you need to
modify the OSPF setting of the NE.
In addition to the dynamic routing protocol, the NE supports static routes. Static routes are
manually configured routes. Static routes have a higher priority than dynamic routes. When
there is a route conflict, the equipment selects static routes.
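The static-over-dynamic preference can be illustrated with a minimal lookup sketch (the function name and table shapes are ours, not the NE's internal structures):

```python
def select_route(destination, static_routes, ospf_routes):
    """Resolve a destination to (next_hop, source), preferring static routes.

    When both tables contain a route for the same destination (a route
    conflict), the static entry wins, mirroring the NE's behavior.
    """
    if destination in static_routes:
        return static_routes[destination], "static"
    if destination in ospf_routes:
        return ospf_routes[destination], "ospf"
    return None, None
```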
Access Modes
In the IP over DCC solution, there are two modes for the U2000 to access an NE, namely, gateway
mode and direct connection mode.
l
Gateway Mode
In the gateway mode, the U2000 accesses a non-gateway NE through the gateway NE. The
gateway NE queries the core routing table of the application layer according to the ID of
the NE to be accessed to obtain the corresponding route.
The core routing table synthesizes the transport layer routing tables of all communication
protocol stacks. Each route item includes the following:
ID of the destination NE
Address of the transfer NE
Communication protocol stack of the transfer NE
Transfer distance
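The four attributes of a core-route item can be modeled as follows (an illustrative sketch; the field names are ours). When several items lead to one destination NE, picking the shortest transfer distance gives the preferred route:

```python
from dataclasses import dataclass

@dataclass
class CoreRouteItem:
    dest_ne_id: str      # ID of the destination NE, e.g. "9-102"
    transfer_addr: str   # address of the transfer NE
    stack: str           # communication protocol stack of the transfer NE
    distance: int        # transfer distance

def best_route(table, dest_ne_id):
    """Pick the core-route item with the shortest transfer distance."""
    candidates = [r for r in table if r.dest_ne_id == dest_ne_id]
    return min(candidates, key=lambda r: r.distance) if candidates else None
```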
OSPF
The Open Shortest Path First (OSPF) protocol is a dynamic routing protocol that is based on the
link status. NEs update their routing tables dynamically through the OSPF protocol. The OSPF
protocol divides an autonomous domain into several areas and updates the routes in the areas or
between areas. Generally, the gateway NE and the non-gateway NEs that are managed by the
gateway NE are in the same OSPF area.
21.3.1.2 Networking
The application of the IP protocol enables the hybrid networking of Huawei equipment and
equipment of other vendors. The typical networking modes of the DCN based on the IP over
DCC communication are described as follows.
(The figures show typical IP over DCC networking: OptiX NEs interconnected with third-party equipment over IP over DCC channels.)
21.3.2 Availability
The IP over DCC solution requires the support of the applicable equipment, boards, and software.
Version Support
l Applicable NMS: T2000 and U2000.
Hardware Support
l Applicable boards: N4GSCC, N6GSCC, N1PSXCS, N2PSXCSA, T1PSXCSA, R1PCXLN, TNN1PSXCS, TNN1SCA, N1SXCSA, N3PSXCSA.
The IP protocol stack of NEs can communicate with the HWECC protocol stack.
The IP protocol stack of NEs cannot communicate with the OSI protocol stack.
If DCC bytes are used to transparently transmit NM messages when the OptiX equipment
is used together with the third-party equipment to form a network, you can adopt the IP
protocol stack to manage the OptiX equipment. It is, however, recommended that you use
the HWECC protocol.
21.3.4 Principles
This topic describes how an NE transfers messages depending on the mode in which the
U2000 accesses an NE. The implementation principles in different modes vary slightly from
each other.
Gateway Mode
Figure 21-15 illustrates how the IP over DCC solution transfers the messages originating from
the U2000 to a non-gateway NE when the U2000 adopts the gateway mode to access the NE.
Figure 21-15 Implementation principle of message transferring (gateway mode)
(The figure shows the NMS connected over Ethernet to the gateway NE, which reaches the destination NE through a transfer NE; the NMS and gateway NE communicate through Application/TCP/IP over Ethernet, and the NEs communicate through Application/TCP/IP over PPP/DCC.)
1.
The U2000 transfers application layer messages to the gateway NE through the TCP
connection between them.
2.
The gateway NE extracts the messages from the TCP/IP protocol stack and reports the
messages to the application layer.
3.
The network layer of the gateway NE queries the address of the destination NE in the
messages. If the address of the destination NE is not the same as the address of the local
station, the gateway NE queries the core routing table of the network layer according to the
address of the destination NE to obtain the corresponding route and the communication
protocol stack of the transfer NE. As the communication protocol stack of the transfer NE
in Figure 21-15 is IP, the gateway NE transfers the messages to the transfer NE through
the IP protocol stack.
4.
After receiving the packet that encapsulates the messages, the network layer of the transfer
NE queries the destination IP address of the packet. If the destination IP address is not the
NE IP address of the local station, the transfer NE queries the IP routing table according
to the destination IP address to obtain the corresponding route, and then transfers the packet.
5.
After receiving the packet, the network layer of the destination NE reports the packet to
the application layer through the transport layer because the destination IP address of the
packet is the NE IP address of the local station. The application layer acts according to the
message sent from the U2000.
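Steps 4 and 5 above, the per-hop decision made at the network layer, can be sketched as follows (illustrative only, not NE software):

```python
def handle_packet(local_ip, dst_ip, ip_routes):
    """Network-layer decision an NE makes for a received packet.

    If the packet's destination IP is the local NE's IP, the packet is
    passed up through the transport layer to the application layer;
    otherwise the IP routing table supplies the next hop and the
    packet is forwarded.
    """
    if dst_ip == local_ip:
        return ("deliver_to_application", None)
    next_hop = ip_routes.get(dst_ip)
    if next_hop is None:
        return ("drop", None)   # no route to the destination
    return ("forward", next_hop)
```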
(The figure shows the direct connection mode: the T2000 connects over Ethernet, packets traverse transfer NEs at the IP layer over PPP/DCC, and only the destination NE passes them up through TCP to the application layer.)
The original gateway NE in the direct connection mode acts as an ordinary transfer NE and the
message transferring is implemented at the network layer. This is different from the gateway
mode.
When the U2000 is used to manage NEs, the number of non-gateway NEs accessed through
one gateway NE should not exceed 100; a maximum of 50 is recommended.
The gateway NE and the non-gateway NEs that are managed by the gateway NE must be
in the same OSPF area.
Plan static routes when the U2000 and the gateway NE or the NEs that need to be directly
accessed by the U2000 cannot be interconnected through the dynamic route at the network
layer.
Configure the IP addresses of standard class A, B or C for NEs. That is, the IP address of
an NE ranges from 1.0.0.1 to 223.255.255.254. The broadcast IP address, network IP
address, and IP address such as 127.x.x.x, however, should be excluded. The subnet
addresses such as 192.168.x.x and 192.169.x.x should also be excluded.
The IP address should be used together with the subnet mask. The subnet mask of variable
length is supported.
When an NE accesses the U2000 using the static routing protocol, use different IP subnets
for the gateway NE and non-gateway NEs.
Two networks connected through Ethernet should be classified into different IP subnets to
prevent certain NEs from failing to access the U2000 when the network is divided into domains.
If the network is comprised of only the OptiX equipment, it is recommended that you use
bytes D1 to D3 in SDH frames as DCCs.
If the network is comprised of both OptiX equipment and the third-party SDH equipment,
use the DCC bytes that are used by the third-party equipment (for example, bytes D1 to D3
or D4 to D12) as DCCs.
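The NE IP address rules above can be checked mechanically. The sketch below is our helper, not NMS code; it validates the 1.0.0.1-223.255.255.254 range and the excluded 127.x.x.x, 192.168.x.x, and 192.169.x.x addresses. Broadcast and network addresses depend on the subnet mask and are left to the caller.

```python
import ipaddress

def is_valid_ne_ip(addr):
    """Check an NE IP address against the planning rules (a sketch)."""
    ip = ipaddress.IPv4Address(addr)
    octets = [int(part) for part in addr.split(".")]
    # Standard class A/B/C range: 1.0.0.1 to 223.255.255.254.
    if not (ipaddress.IPv4Address("1.0.0.1") <= ip <= ipaddress.IPv4Address("223.255.255.254")):
        return False
    # Loopback addresses 127.x.x.x are excluded.
    if octets[0] == 127:
        return False
    # Subnets 192.168.x.x and 192.169.x.x are excluded.
    if (octets[0], octets[1]) in {(192, 168), (192, 169)}:
        return False
    return True
```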
(Figure 21-18 shows the network: the NMS and a third-party NMS connect through a hub to NE101-NE104, NE106, NE207, and NE208; the legend distinguishes fiber and the 2 Mbit/s channel.)
Figure 21-18 shows a transmission network that is comprised of both OptiX equipment and the
third-party equipment that supports the IP over DCC feature. The steps to plan the DCN are as
follows:
1.
Select NE101 that is the closest to the U2000 as the gateway NE.
2.
As the third-party equipment uses the DCC bytes D1 to D3, it is not required to modify the
DCCs of the NEs.
3.
Allocate IDs and IP addresses for all the NEs according to the situation of the network. The
subnet mask of all the NEs is 255.255.0.0.
The allocation (Extended ID-Basic ID / IP address / Gateway) is as follows:
l 9-101: 100.1.0.101, gateway 0.0.0.0
l 9-102: 100.0.0.102, gateway 0.0.0.0
l 9-103: 100.0.0.103, gateway 0.0.0.0
l 9-104: 100.0.0.105, gateway 0.0.0.0
l 9-105: 200.1.0.105, gateway 0.0.0.0
l 9-106: 200.1.0.106, gateway 0.0.0.0
l 9-107: 200.0.0.107, gateway 0.0.0.0
The U2000 is at 10.0.0.100/16; the router interfaces are 100.1.0.1/16 and 100.1.0.100/16; the third-party side uses 30.0.0.1/16 through 30.0.0.4/16.
4.
Plan IP routes.
l As the number of NEs is smaller than 64 and all the NEs support the OSPF protocol,
configure all the NEs to the same OSPF area.
l The U2000 and the gateway NE (NE101) are in the same Ethernet segment. The gateway
NE can obtain the routes to all other NEs through the OSPF protocol. Therefore, the
U2000 can access any NE in the gateway mode, and you do not need to set any route.
l As there are routers between the third-party NMS and NE101, dynamic routes alone
cannot connect the third-party equipment and the U2000. Therefore, set static routes
on NE101, NE102, the U2000 server, and the routers.
(Configuration flowchart: 2. configure DCCs for NEs; 3. check the IP routes of the gateway NE; 4. if IP static routes are required, configure them; then the configuration ends.)
Table 21-3 Description of the configuration process of the IP over DCC solution
Number
Description
l The DCC protocol type of the line port should be set to IP.
l The DCC channel type of the SDH port should be the same as that of the
third-party equipment. If the network is comprised of only the OptiX
equipment, set the DCC type to D1-D3.
l For the configuration process, see Configuring DCCs.
l In the normal situation, a gateway NE should have the routes to all the
non-gateway NEs that are managed by the gateway NE and the route to
the U2000.
l For the querying process, see Querying IP Routes.
Prerequisites
l
Procedure
Step 1 Select an NE in the NE Explorer. Choose Configuration > NE Attribute from the Function
Tree.
Step 2 Click Modify NE ID. The Modify NE ID dialog box is displayed.
l After the operation, the communication between the U2000 and the NE is interrupted.
l After you change the ID of the NE, the original configuration information may be lost and thus the
protection and other features fail to work normally.
----End
Follow-up Procedure
After you change the ID of the NE, a warm reset is performed on the SCC board. In this case,
you need to log in to the NE again after a certain period.
Prerequisites
l
Procedure
Step 1 Select an NE from the Object Tree in the NE Explorer. Choose Communication >
Communication Parameters from the Function Tree.
Step 2 Set the communication parameters of the NE according to the network planning.
Prerequisites
l
Context
The NE uses D1-D3 as DCC by default and allows the DCC communication.
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Communication > DCC
Management from the Function Tree.
Step 2 Click the DCC Rate Configuration tab. Then, click New. The system displays the dialog box.
Set the Port, Channel Type, Protocol Type and LAPD Role fields.
Step 3 Click OK, and then click OK in the Operation Result dialog box.
Step 4 Optional: Click Query to query DCCs.
Step 5 Optional: Select the required DCC, and modify the parameters according to Table 21-4. Then,
click Apply to finish the modification.
Step 6 Optional: Select the required DCC, and click Delete to delete the DCC.
NOTE
When the board is configured with TPS, the DCCs on the working board are automatically copied to the
protection board. Therefore, the DCCs on the protection board cannot be deleted.
----End
Example
Table 21-4 Parameters
Parameter
Value Range
Default Value
Description
Port
Channel Type
D1-D1
It is recommended
that you use the
default value except
in the following
cases:
l When the IP over
DCC solution or
OSI over DCC
solution is
applied, set
Channel Type of
the SDH line
ports to the same
value as the
Channel Type of
third-party
network.
l When the DCC
transparent
transmission
solution is
applied, the
Channel Type of
the SDH line
ports cannot
conflict with the
channel type of
the third-party
network.
Protocol Type
IP
It is recommended
that you use the
default value except
in the following
cases:
l When the
HWECC solution
is applied, set
Protocol Type to
HWECC.
l When the OSI
over DCC
solution is
applied, set
Protocol Type to
OSI.
LAPD Role
Network, User
Network
l This parameter is
valid only when
Protocol Type is
set to OSI.
l Set LAPD Role
to User at one end
of a DCC and to
Network at the
other end of the
DCC.
Prerequisites
l
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Communication > IP Stack
Protocol Management from the Function Tree. Click the IP Route Management tab.
Step 2 Check whether the IP routes and their parameters in the routing table are the same as the planning
information.
----End
Prerequisites
l
Context
The OSPF protocol is enabled by default.
Procedure
Step 1 Select an NE in the NE Explorer. Choose Communication > IP Stack Protocol
Management from the Function Tree. Click the OSPF Parameter Settings tab.
Step 2 Click Query to check if the OSPF protocol status is normal.
----End
Follow-up Procedure
If the OSPF protocol is incorrect, contact Huawei engineers to adjust the OSPF protocol
parameters used by the NEs.
Prerequisites
l
Context
When the U2000 and the gateway NE are directly connected through LAN and the IP addresses
of the U2000 and the gateway NE are in the same network segment, when the remote NEs
connect to the gateway NE through fibers, and when the IP addresses of the remote NEs, the
gateway NE, and the U2000 are in the same subnet, there are requirements that the U2000
accesses the NEs on the entire network through the gateway NE and the upper layer application
requirement of accessing remote NEs based on the IP network layer. To meet the upper layer
application requirement of accessing remote NEs based on the IP network layer, you need to
enable the proxy ARP of the gateway NE.
Procedure
Step 1 Select an NE in the NE Explorer. Choose Communication > IP Stack Protocol
Management from the Function Tree. Click the Proxy ARP tab.
Step 2 Optional: Click Query.
Step 3 Select Enabled from the drop-down list.
Step 4 Click Apply. Click Close in the Operation Result dialog box.
----End
Follow-up Procedure
After you enable proxy ARP, you need to create a static route for each NE.
Prerequisites
l
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Communication > IP Stack
Protocol Management from the Function Tree. Click the IP Route Management tab.
Step 2 Click New. The system displays the Create an IP Route dialog box.
Step 3 Set the parameters of the static IP route.
Parameters
Parameter
Value Range
Default Value
Description
Destination IP
Address
Subnet Mask
This parameter
specifies the subnet
mask of the set
Destination IP
Address.
Gateway IP Address
This parameter
specifies the IP
address of the
gateway to which the
set Destination IP
Address
corresponds, that is,
the next-hop address.
NOTE
The created static route has a lower priority than a dynamic route.
Precautions
NOTE
l For the parameters in this example, see the example in Network Planning.
l This example provides only the configurations of the typical NEs: NE101, NE102, NE105, and NE106.
l For convenience of description, the following steps configure the functions one by one. In an
actual configuration, perform the configuration NE by NE.
Procedure
Step 1 Set the IDs of the NEs. See the Configuration Guide.
NE
Extended ID
NE ID
NE101
101
NE102
102
NE105
105
NE106
106
Step 2 Set the IP address information for the NEs. See the Configuration Guide.
NE
IP Address
Subnet Mask
Default Gateway
NE101
100.1.0.101
255.255.255.0
0.0.0.0 (default
value)
NE102
100.0.0.102
255.255.255.0
0.0.0.0 (default
value)
NE105
200.1.0.105
255.255.255.0
0.0.0.0 (default
value)
NE106
200.1.0.106
255.255.255.0
0.0.0.0 (default
value)
Step 3 Set the DCCs for the NEs. See Configuring DCCs.
l Set the DCC protocol type of all the line ports of all NEs to IP(default value).
l Set the channel type of all SDH microwave ports of all NEs to D1-D3.
Step 4 Check the IP routes of the gateway NE. See Querying IP Routes.
NOTE
If certain routes are unavailable, request Huawei engineers to adjust the parameters of the OSPF protocol used
by the NEs.
Step 5 Set an IP static route at both NE101 and NE102. See Configuring the IP Static Route for an NE.
- The parameters of the IP static route at NE101 are as follows:
  Destination address: 10.0.0.0
  Subnet mask: 255.255.0.0
  Gateway: 100.1.0.1
- The parameters of the IP static route at NE102 are as follows:
  Destination address: 10.0.0.0
  Subnet mask: 255.255.0.0
  Gateway: 100.1.0.101
NOTE
Set the corresponding route on both the NMS of the third-party equipment and the router.
----End
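As an illustration of how a static route of this kind is matched, the following sketch (not part of the product; plain Python with the standard ipaddress module, using the NE101 entries from Step 5) applies the destination/mask/gateway entries to a packet's destination address:

```python
import ipaddress

# Static route as configured at NE101 in Step 5 above:
# destination 10.0.0.0, subnet mask 255.255.0.0, gateway 100.1.0.1.
ROUTES_NE101 = [
    ("10.0.0.0", "255.255.0.0", "100.1.0.1"),
]

def next_hop(dest_ip, routes, default=None):
    """Return the gateway of the most specific route covering dest_ip."""
    best = None
    for dst, mask, gw in routes:
        net = ipaddress.ip_network(f"{dst}/{mask}")
        if ipaddress.ip_address(dest_ip) in net:
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, gw)
    return best[1] if best else default

# Traffic for the 10.0.x.x NMS subnet leaves via the configured gateway;
# anything else falls back to the default route.
```

This mirrors the idea in the table above: the subnet mask decides which destination addresses the route covers, and the gateway IP address is the next hop for them.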
Result
1.
Therefore, additional overheads or service resources are occupied for DCC information
transparent transmission in the networking of equipment from different vendors. The OSI over
DCC solution, however, supports the OSI protocol without occupying additional overheads or
service resources. If the equipment of other vendors supports the OSI over DCC technology, the
management channels can be interconnected.
(Protocol stack, from top to bottom:
Application layer: Qx/MML/SWDL
Transport layer: TP4
Network layer: IS-IS/ES-IS/CLNP
Data link layer: LAPD
Physical layer: DCC)

Physical layer

The main function of the physical layer is to provide channels for data transmission for the data end equipment.
Physical channels are classified as follows:
- DCC channel
  DCC channels use the DCC bytes in SDH frames or the GCC bytes in OTN frames as the channels for the communication among NEs.
Network layer
The main function of the network layer is to specify the network layer address for a network
entity and to provide the transferring and addressing functions.
The NE adopts the ISO-defined connectionless network service (CLNS) to realize the network layer function. The CLNS comprises the following three protocols:
- CLNP protocol
  (Figure 21-21 shows the 20-byte NSAP address structure: the AFI, IDI + pad, and the higher order DSP fields (DFI, ORG, RES, RD) form the area address, followed by the System ID and the NSEL.)
The NE uses the simplified NSAP address. The simplified NSAP address includes only the following three parts:
- Area ID
  The area ID refers to the area address shown in Figure 21-21 and has 1 to 13 bytes. The area ID is used to address the routes between areas. The NSAPs of the NEs in the same L1 route area must have the same area ID, but those in the same L2 route area can have different area IDs. You can manually set the area ID. The default value of the area ID is 0x47000400060001.
- System ID
  The System ID refers to the system ID shown in Figure 21-21 and has six bytes. The System ID is used to address the routes within an area. The value of the first three bytes of the System ID of the OptiX equipment is always 0x08003E. The last three bytes are the NE ID.
- NSEL
  The NSEL refers to the port ID of the network layer protocol and has one byte. The NSEL of the OptiX equipment is always 0x1D.
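The three-part layout described above can be sketched as follows. This is an illustrative Python helper, not a product function; the byte values follow the defaults stated in the text (default area ID 0x47000400060001, System ID prefix 0x08003E, NSEL 0x1D):

```python
DEFAULT_AREA_ID = bytes.fromhex("47000400060001")  # default area ID (7 bytes)
SYSTEM_ID_PREFIX = bytes.fromhex("08003E")         # fixed first three System ID bytes
NSEL = 0x1D                                        # fixed selector of the OptiX equipment

def simplified_nsap(ne_id, area_id=DEFAULT_AREA_ID):
    """Build the simplified NSAP: area ID + 6-byte System ID + 1-byte NSEL."""
    if not 1 <= len(area_id) <= 13:
        raise ValueError("the area ID must have 1 to 13 bytes")
    # The last three bytes of the System ID carry the NE ID.
    system_id = SYSTEM_ID_PREFIX + ne_id.to_bytes(3, "big")
    return area_id + system_id + bytes([NSEL])

addr = simplified_nsap(101)  # NE with NE ID 101
```

With the default area ID, the address for NE ID 101 is the 14-byte value 0x47000400060001 08003E000065 1D.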
- IS-IS protocol
  In the CLNS, NEs are classified into intermediate systems (ISs) and end systems (ESs) according to the NE role. The IS is equivalent to the router in the TCP/IP protocol stack, and the ES is equivalent to the host.
  The IS-IS protocol is a dynamic routing protocol between two ISs. It complies with ISO 10589 and functions as the OSPF protocol does in the TCP/IP protocol stack. The IS-IS protocol supports the L1 and L2 layered routes. An NE whose role is L1 cannot be a neighbor of an NE in a different area and is involved only in the routes in its own area. It issues a default route that points to its closest L2 NE and accesses other areas through the default route. An NE whose role is L2 can be a neighbor of an L2 NE in a different area and can also be involved in the routes in the backbone area. The backbone area is formed by consecutive L2 NEs. In other words, the L2 NEs in the backbone area must be consecutive (connected). In the network shown in Figure 21-22, because the L2 NEs in the backbone area are not consecutive, the NEs in area 4 are isolated from the NEs in other areas. By default, the role of the OptiX equipment is L1.
Figure 21-22 Layered routes of IS-IS protocol routes (L2 not consecutive)
(The figure shows an OSI DCN with the NMS, areas 1 to 4, L1 and L2 NEs, and the backbone area.)
NOTE
L2 NEs are classified into two categories: NEs with only the L2 role, and NEs with both the L2 and the L1 roles. Generally, an L2 NE also has the L1 role.
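The rule that the backbone must be consecutive can be sketched as a simple connectivity check over the L2 NEs. This is an illustrative Python helper, not a product function; it flags topologies like Figure 21-22, where an isolated group of L2 NEs splits the backbone:

```python
from collections import deque

def backbone_consecutive(l2_nes, links):
    """Check that the L2 NEs form one connected backbone.

    links: pairs of directly connected NEs; only links whose two
    endpoints are both L2 NEs count toward backbone connectivity.
    """
    l2 = set(l2_nes)
    adj = {n: set() for n in l2}
    for a, b in links:
        if a in l2 and b in l2:
            adj[a].add(b)
            adj[b].add(a)
    if not l2:
        return True
    # Breadth-first search from any L2 NE; the backbone is consecutive
    # only if every L2 NE is reached.
    seen, todo = set(), deque([next(iter(l2))])
    while todo:
        n = todo.popleft()
        if n in seen:
            continue
        seen.add(n)
        todo.extend(adj[n] - seen)
    return seen == l2
```

If the check fails, the NEs behind the disconnected L2 NEs are unreachable from the other areas, which is exactly the situation shown in Figure 21-22.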
- ES-IS protocol
  The ES-IS protocol is a dynamic routing protocol between the ES and the IS. It complies with ISO 9542 and functions as the ARP and ICMP protocols do in the TCP/IP protocol stack.
Transport layer
The main function of the transport layer is to provide the end-to-end communication service for
the upper layer. The NE adopts the TP4 protocol to realize the transport layer function. The TP4
protocol complies with ISO 8073. Its functions are similar to those of TCP in the TCP/IP protocol stack.
Access Mode

In the OSI over DCC solution, there are two modes for the U2000 to access an NE: the gateway mode and the direct connection mode.
- Gateway mode
  In the gateway mode, the U2000 accesses a non-gateway NE through the gateway NE. The gateway NE queries the core routing table of the application layer according to the ID of the NE to be accessed to obtain the corresponding route.
  The core routing table synthesizes the transport layer routing tables of all communication protocol stacks. Each route item includes the following:
  - ID of the destination NE
  - Address of the transfer NE
  - Communication protocol stack of the transfer NE
  - Transfer distance
The adjacency No. is the ID of an LAPD connection. You can query the link adjacency table of the
data link layer to obtain the mapping relation between the adjacency No. and the LAPD connection.
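As a sketch of the core routing table described above (the field names and the shortest-distance selection are assumptions for illustration; the actual table format is internal to the equipment):

```python
from dataclasses import dataclass

@dataclass
class CoreRoute:
    dest_ne: int      # ID of the destination NE
    transfer_ne: str  # address of the transfer (next-hop) NE
    stack: str        # communication protocol stack of the transfer NE
    distance: int     # transfer distance

def pick_route(table, dest_ne):
    """Return the shortest-distance route toward dest_ne, or None."""
    candidates = [r for r in table if r.dest_ne == dest_ne]
    return min(candidates, key=lambda r: r.distance) if candidates else None

# Hypothetical table with two routes toward the NE whose ID is 0x000065:
table = [
    CoreRoute(0x000065, "transfer-1", "OSI", 2),
    CoreRoute(0x000065, "transfer-2", "OSI", 3),
]
```

The protocol stack field matters because, as described under Principles below, the gateway NE must hand the message to the transfer NE through whichever stack that NE runs.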
- Direct connection mode
  In the OSI over DCC solution, the U2000 can access any NE in the direct connection mode; that is, the U2000 can consider any NE as the gateway NE. To improve the communication efficiency, there should not be many NEs that are accessed in the direct connection mode in a network.
TP4 Protocol
The TP4 protocol is located at the transport layer of the OSI protocol stack. The function of the TP4 protocol is similar to that of the TCP protocol in the IP protocol family. Through the TP4 protocol, a reliable communication connection can be established between two nodes to handle problems such as loss, duplication, and disordering of data packets caused by network faults.
21.4.1.2 Networking
The OSI over DCC protocol is used in the hybrid networking of Huawei equipment and other
equipment that supports the OSI protocol. The typical networking modes of the DCN based on
the OSI over DCC protocol are described as follows.
(Figure: typical networking of the OSI over DCC solution, with third-party equipment running the OSI protocol stack at multiple points in the network.)
In practical application, the networking is more complex, and hybrid networking of equipment from
different suppliers at the network edge and in the backbone networks is frequently adopted.
Figure 21-24 Transparent transmission of NM information of the third-party equipment (OSI)
(The figure shows third-party equipment running the OSI protocol stack on either side of an OSI over DCC link.)
21.4.2 Availability
The OSI over DCC solution requires the support of the applicable equipment, boards, and
software.
Version Support

Applicable Equipment   Applicable Version
T2000
U2000
Hardware Support

Applicable boards: N4GSCC, N6GSCC, N1PSXCS, N2PSXCSA, T1PSXCSA, R1PCXLN, TNN1PSXCS, TNN1SCA, N1SXCSA, N3PSXCSA
- The OSI protocol stack of NEs can communicate with the HWECC protocol stack only in the same area of the L1 layer.
- The OSI protocol stack of NEs cannot communicate with the IP protocol stack.
- If DCC bytes are used to transparently transmit NM messages when the OptiX equipment is used together with the third-party equipment to form a network, you can adopt the OSI protocol stack to manage the OptiX equipment. It is, however, recommended that you use the HWECC protocol.
21.4.4 Principles
This topic describes how an NE transfers messages depending on the mode in which the
U2000 accesses an NE. The implementation principles in different modes vary slightly from
each other.
Gateway Mode

Figure 21-25 illustrates how the OSI over DCC solution transfers U2000 messages to a non-gateway NE when the U2000 adopts the gateway mode to access the NE.
(Figure 21-25: the NMS connects to the gateway NE over Ethernet; the message then passes through a transfer NE to the destination NE over DCCs, with the application, TP4, ES-IS/IS-IS/CLNP, LAPD, and DCC layers used at the corresponding nodes.)
1. As an ES, the U2000 first detects the gateway NE through the ES-IS routing protocol, establishes a TP4 connection, and finally transfers application layer messages to the gateway NE through the TP4 connection.
2. The gateway NE extracts the messages from the OSI protocol stack and reports the messages to the application layer.
3. The network layer of the gateway NE queries the address of the destination NE in the messages. If the address of the destination NE is not the same as the address of the local station, the gateway NE queries the core routing table according to the address of the destination NE to obtain the corresponding route and the communication protocol stack of the transfer NE. As the communication protocol stack of the transfer NE in Figure 21-25 is OSI, the gateway NE transfers the messages to the transfer NE through the OSI protocol stack.
4. After receiving the packet that encapsulates the messages, the network layer of the transfer NE queries the destination NSAP address of the packet. If the NSAP address is not the same as the address of the local station, the transfer NE queries the L1 routing table or the L2 routing table according to the destination NSAP address to obtain the corresponding route, and then transfers the packet.
5. After receiving the packet, the network layer of the destination NE reports the packet to the application layer through the transport layer because the destination NSAP address of the packet is the same as the address of the local station. The application layer acts according to the message sent from the U2000.
(Figure: in the direct connection mode, the NMS connects to an NE over Ethernet, and the message passes through transfer NEs to the destination NE, with the application, TP4, ES-IS/IS-IS/CLNP, LAPD, and DCC layers used at the corresponding nodes.)
The original gateway NE in the direct connection mode acts as an ordinary transfer NE and the
message transferring is implemented at the network layer. This is different from the gateway
mode.
Only a node at the end of a network can be configured as an ES. The limited route resources of an ES affect network expansion. Therefore, it is not recommended that you configure the equipment as an ES. The U2000 works as an ES.
- L1-IS is the default node type of Huawei products. It supports only intra-area routing (Level 1 routing).
- If inter-area routing (Level 2 routing) is required, set the network node type of the equipment to L2-IS. L2-IS maintains two routing tables at the same time: one is used for intra-area routing and the other for inter-area routing.
- The OptiX equipment supports IS-IS Level 2 routing. When the OSI communication protocol is used, you need to divide the network into areas according to the network size. The number of areas, which is also the number of Level 2 NEs, on the entire DCN cannot exceed 32. A gateway NE is a Level 2 NE. The number of NEs in the same area cannot exceed 50.
- Configure the DCN as a ring to ensure that routes can be protected when a fiber is cut or any anomaly occurs on an NE.
- When the OptiX equipment interworks with the third-party equipment, comply with the design principles for the third-party equipment as well during network planning.
When planning NSAP area addresses and layered routes, comply with the following principles:
The OSI protocol supports the layered routing function. It uses the SYS ID of the system to achieve intra-area routing, and uses the AREA ID to achieve inter-area routing. In the DCN planning, divide the network into areas properly and decide the number of NEs in each area according to the network topology.

If the number of NEs in a network is smaller than 50, you do not need to divide the network into areas. In this case, set the node type of all NEs to L1-IS, and set the AREA IDs in the NSAP area addresses of all NEs to the same value.

In the case of a large-scale network, comply with the following principles to divide the network:
- Set several NEs in each area to L2-IS. Two NEs in each area are recommended because the two can back up each other.
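The numeric planning limits stated in this section (at most 32 areas on the entire DCN and at most 50 NEs in one area) can be checked with a small helper. This is an illustrative Python sketch, not a U2000 function:

```python
MAX_AREAS = 32          # areas (Level 2 NEs) on the entire DCN
MAX_NES_PER_AREA = 50   # NEs in one area

def check_dcn_plan(areas):
    """areas: mapping of area ID to the list of NEs planned for that area.

    Returns a list of human-readable problems; an empty list means the
    plan respects the limits above."""
    problems = []
    if len(areas) > MAX_AREAS:
        problems.append("too many areas")
    for area_id, nes in areas.items():
        if len(nes) > MAX_NES_PER_AREA:
            problems.append(f"area {area_id}: more than {MAX_NES_PER_AREA} NEs")
    return problems
```

A plan that violates either limit should be re-divided before the NSAP area addresses are assigned.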
- When OSI over DCC is used to establish a DCN, a TP4 connection is required between the U2000 and the gateway NE. The management data that the U2000 sends to non-gateway NEs is forwarded by the gateway NE. When creating a gateway NE on the U2000, enter the NE ID and the NSAP address. When creating a non-gateway NE, enter the NE ID and specify a gateway NE for the NE.
- When all nodes in the DCN run the OSI protocol stack, do not configure all NEs as gateway NEs. Configure certain NEs as gateway NEs, configure the other NEs as non-gateway NEs, and specify a gateway NE for each non-gateway NE. The number of non-gateway NEs under one gateway NE should not exceed 100, and it is recommended that the maximum number be 50. Otherwise, the gateway NE is overloaded and the performance of the U2000 decreases.
- Configure an NE that is close to the U2000 as a gateway NE. In this case, the communication between the U2000 and the gateway NE requires fewer overheads and provides higher efficiency.
- When dividing a network to support layered routes, configure one or several NEs in each area as gateway NEs. When creating a non-gateway NE, specify a gateway NE in the local area for the NE.
- To ensure reliable communication between the U2000 and a non-gateway NE, a backup gateway NE is generally specified for a non-gateway NE.
- If the network is comprised of only the OptiX equipment, it is recommended that you use bytes D1 to D3 in SDH frames as DCCs.
- If the network is comprised of both the OptiX equipment and the third-party SDH equipment, use the DCC bytes that are used by the third-party equipment (for example, bytes D1 to D3 or D4 to D12) as DCCs.
- For the two ends of a DCC, set the LAPD role to User at one end and to Network at the other end.
Figure 21-27 Networking example for the OSI over DCC solution
(The figure shows the NMS, the third-party NMS, NE101, NE102, NE201, NE202, and NE301 through NE304, together with the third-party transmission equipment, interconnected by fibers and network cables over the OSI DCN.)
Figure 21-27 shows a transmission network that is comprised of both OptiX equipment and the
third-party equipment that supports the OSI over DCC feature. The steps to plan the DCN are
as follows:
1. (The planning figure shows the OSI DCN divided into areas 1, 2, and 3, with L2 NEs assigned in each area, the NMS and the third-party NMS attached, and an AREA ID of 0x394F1190 for one area.)
2.
3. Select the router that supports the OSI protocol stack to form the external DCN.
4.
Plan DCCs.
l As the third-party equipment uses the DCC bytes D1 to D3, it is not required to modify
the DCCs of the NEs.
l Set the LAPD role of each DCC to Network at the end nearer to the U2000 and to
User at the other end of the DCC.
(Configuration flowchart: 2. Configure DCCs for NEs; 3. Configure node types for NEs; 4. Configure communication protocols and LAPD roles for optical interfaces; 5. Query routing information; 6. Create OSI gateway NEs; End.)
Table 21-5 Description of the configuration process of the OSI over DCC solution

Configure the network-wide data, including the NE IDs and the NE NSAP addresses.
- The NE ID is set when an NE is created. For the configuration process, see the Configuration Guide.
- To change the ID of an NE, see Changing the ID of an NE.
- For the process of configuring the NSAP address of an NE, see Setting the NSAP Address of an NE.

Configure DCCs.
- The DCC protocol type of the line port should be set to OSI.
- The DCC channel type of the port should be the same as that of the third-party equipment. If the network is comprised of only the OptiX equipment, set the DCC type to D1-D3.
- For the configuration process, see Configuring DCCs.
You can set the node type to ES, L1-IS, or L2-IS for an NE according to the network planning. For the configuration process, see Setting the Node Type for an NE.
- If the currently enabled protocol is different from the planned one, you can configure the protocol on the U2000.
- For the two ends of a DCC, set the LAPD role to User at one end and to Network at the other end. For the configuration process, see Configuring the Communication Protocol Stack and LAPD Role for an Optical Interface.

Query routing information.
- The L1 routing table of an L1 NE has the routes to all the NEs in the area.
- The L1 routing table of an L2 NE has the routes to all the NEs in the area. The L2 routing table of the L2 NE has the routes to other L2 NEs.
- The gateway NE has the route to the U2000 or the routes to the L2 NEs that are in the same area as the U2000.
- The gateway NE has the routes to all non-gateway NEs.
- For the querying process, see Query Routing Information.
Prerequisites
l
Procedure
Step 1 Select an NE in the NE Explorer. Choose Configuration > NE Attribute from the Function
Tree.
Step 2 Click Modify NE ID. The Modify NE ID dialog box is displayed.
- After the operation, the communication between the U2000 and the NE is interrupted.
- After you change the ID of the NE, the original configuration information may be lost, and thus the protection and other features may fail to work normally.
----End
Follow-up Procedure
After you change the ID of the NE, a warm reset is performed on the SCC board. In this case,
you need to log in to the NE again after a certain period.
Prerequisites
l
Procedure
Step 1 Select an NE from the Object Tree in the NE Explorer. Choose Communication >
Communication Parameters from the Function Tree.
Step 2 Set the communication parameters of the NE according to the network planning.
Prerequisites
l
Context
If the U2000 has established a TP4 connection with an NE, modifying the NSAP area address that was previously set according to the network planning interrupts the communication between the U2000 and the gateway NE. Therefore, you need to re-create the communication. You can use the U2000 to create a TCP connection with the NE, set the NSAP area address for the NE, and then modify the connection mode between the U2000 and the NE to TP4.
Procedure
Step 1 Select an NE in the NE Explorer. Choose Communication > Communication Parameters
from the Function Tree.
Step 2 Set the NSAP Address.
Step 3 Click Apply, and then click OK in the Warning dialog box, which is displayed twice. Click Close in the Operation Result dialog box.
----End
Prerequisites
l
Context
The NE uses D1-D3 as the DCC by default and allows DCC communication.
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Communication > DCC
Management from the Function Tree.
Step 2 Click the DCC Rate Configuration tab. Then, click New. The system displays the dialog box.
Set the Port, Channel Type, Protocol Type and LAPD Role fields.
Step 3 Click OK, and then click OK in the Operation Result dialog box.
Step 4 Optional: Click Query to query DCCs.
Step 5 Optional: Select the required DCC, and modify the parameters according to Table 21-6. Then,
click Apply to finish the modification.
Step 6 Optional: Select the required DCC, and click Delete to delete the DCC.
NOTE
When the board is configured with TPS, the DCCs on the working board are automatically copied to the
protection board. Therefore, the DCCs on the protection board cannot be deleted.
----End
Example
Table 21-6 Parameters

Port

Channel Type
  Default Value: D1-D1
  Description: It is recommended that you use the default value except in the following cases:
  - When the IP over DCC solution or the OSI over DCC solution is applied, set the Channel Type of the SDH line ports to the same value as the Channel Type of the third-party network.
  - When the DCC transparent transmission solution is applied, the Channel Type of the SDH line ports cannot conflict with the channel type of the third-party network.

Protocol Type
  Default Value: IP
  Description: It is recommended that you use the default value except in the following cases:
  - When the HWECC solution is applied, set Protocol Type to HWECC.
  - When the OSI over DCC solution is applied, set Protocol Type to OSI.

LAPD Role
  Value Range: Network, User
  Default Value: Network
  Description:
  - This parameter is valid only when Protocol Type is set to OSI.
  - Set LAPD Role to User at one end of a DCC and to Network at the other end of the DCC.
Prerequisites
l
Context
For easy network expansion, the ES type is not recommended.
The setting takes effect only after the NE is reset.
Procedure
Step 1 Select an NE in the NE Explorer. Choose Communication > OSI Management from the
Function Tree.
Parameters

Configuration Role
  Value Range: L2, L1, ES
  Default Value: L1
  Description:
  - The NE whose Configuration Role is set to L1 cannot be a neighbor of an NE in a different area and is involved only in the routes in its own area. It issues a default route that points to its closest L2 NE and accesses other areas through the default route.
  - The NE whose Configuration Role is set to L2 can be a neighbor of an L2 NE in a different area and can also be involved in the routes in the backbone area. The backbone area is formed by consecutive L2 NEs. That is, the L2 NEs in the backbone area must be consecutive (connected).
21.4.6.7 Configuring the Communication Protocol Stack and LAPD Role for an Optical Interface
If the protocol that is enabled for the optical interface of the NE by default is different from the
planned one, you can configure the protocol on the U2000.
Prerequisites
l
Procedure
Step 1 Select an NE in the NE Explorer. Choose Communication > DCC Management from the
Function Tree. Click the DCC Rate Configuration tab.
Step 2 Optional: Click Query. Click Close in the Operation Result dialog box.
Step 3 Select the optical interface that you want to set and click Delete. Click Yes in the Hint dialog
box. Click Close in the Operation Result dialog box.
Step 4 Click New and set the DCC parameters, protocol type, and LAPD role according to the network
planning in the dialog box that is displayed. For details, see Configuring DCCs.
Step 5 Click OK. Click Close in the Operation Result dialog box.
----End
Prerequisites
l
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Communication > OSI
Management from the Function Tree.
Step 2 Click the Routing Table tab.
Step 3 Check whether the information in Link Adjacency Table meets the planning requirements.
Step 4 Click the L1 Routing tab to check whether the information of the L1 routes is correct.
Step 5 Click the L2 Routing tab to check whether the information of the L2 routes is correct.
----End
Follow-up Procedure
If the routing information of the node is incorrect, check the preceding configuration.
Prerequisites
l
Procedure
Step 1 Choose Administration > DCN Management from the Main Menu. Click the GNE tab.
Step 2 Right-click a gateway NE and choose Modify GNE from the shortcut menu.
Step 3 In the Modify GNE dialog box, select OSI Gateway from the Gateway Type drop-down list,
and then enter NSAP Address.
Step 4 Click OK and then click Close in the Operation Result dialog box.
----End
Precautions
NOTE
- For the parameters in this example, see the example in Network Planning.
- This example provides only the configurations of the typical NEs: NE101, NE201, NE301, and NE302.
Procedure
Step 1 Set the IDs of the NEs. See the Configuration Guide to create an NE.

NE      Extended ID   NE ID
NE101                 101
NE201                 201
NE301                 301
NE302                 302
Step 2 Set the NSAP address information for the NEs. See Setting the NSAP Address of an NE.

NE      NSAP Address
NE101   394F1190
NE201   394F1210
NE301   394F1200
NE302   394F1220
Step 3 Set the DCCs for the NEs. See Configuring DCCs.
- Set Protocol for all the line ports of all the NEs to OSI.
- Set Channel Type for all the ports of all the NEs to D1-D3.
- Set the LAPD role to Network for the end of a DCC that is nearer to the U2000, and the LAPD role to User for the other end of the DCC.
Step 4 Configure the CLNS roles for the NEs. See Setting the Node Type for an NE.
- The CLNS role of NE201 and NE302 is L1 (default value).
- The CLNS role of NE101 and NE301 is L2.
Step 5 Query the OSI routes. See Query Routing Information.
- In the L1 routing table, NE101, NE201, NE301, and NE302 have the routes to all the NEs that are in their respective areas.
- In the L2 routing table, NE101 and NE301 have the routes to the other L2 NEs.
----End
Result
All NEs can be successfully created. The communication between the U2000 and NEs is normal.
When DCC bytes are used to transparently transmit NM messages, there are two networking
scenarios:
- (Figure: DCC transparent transmission when the OptiX equipment is at the edge of a network (1); the OptiX equipment and the third-party equipment use bytes D4 to D12.)
Figure 21-31 DCC transparent transmission solution when the OptiX equipment is at the edge of a network (2)
(The figures show chains of OptiX equipment and third-party equipment that use bytes D1 to D3 or bytes D4 to D12 as the transparently transmitted DCCs.)
21.5.2 Availability
The DCC transparent transmission solution requires the support of the applicable equipment,
boards, and software.
Version Support

Applicable Equipment   Applicable Version
T2000
U2000
Hardware Support

Applicable boards: N4GSCC, N6GSCC, N1PSXCS, N2PSXCSA, T1PSXCSA, R1PCXLN, TNN1PSXCS, TNN1SCA, N1SXCSA, N3PSXCSA
21.5.4 Principles
The OptiX equipment realizes the transparent transmission of DCC through the overhead cross-connect matrix.
In the receive direction:
1. The line board extracts the overhead bytes, such as the DCC bytes, from the received SDH signals, forms a 2.048 Mbit/s overhead signal stream, and sends the overhead signal stream to the overhead cross-connect matrix of the SCC board through the overhead bus.
2. The overhead cross-connect matrix transports the DCC bytes that the NE uses to the CPU, and directly transports the DCC bytes that are to be transparently transmitted to the overhead bus of the corresponding line board.
3. The CPU processes the NM messages carried by the DCC bytes according to the protocol stack of the DCCs.
In the transmit direction:
1. The CPU of the SCC board encapsulates the NM messages into the DCC bytes according to the protocol stack and transmits the DCC bytes to the overhead cross-connect matrix of the SCC board.
2. The overhead cross-connect matrix combines the DCC bytes sent from the CPU and other overhead bytes (including the DCC bytes sent from the other line boards and the orderwire bytes) to form a 2.048 Mbit/s overhead signal stream, and then transmits the overhead signal stream to the corresponding line board.
3. The line board extracts the overhead signal from the overhead signal stream, inserts the overhead signal into the SDH signal, and sends the SDH signal to other NEs.
Figure 21-34 illustrates how an NE uses bytes D1 to D3 as DCCs to transparently transmit bytes
D4 to D12.
Figure 21-34 Implementation principle of the DCC transparent transmission
(The figure shows two line boards and the SCC board: bytes D1 to D3 reach the CPU over the overhead bus, while bytes D4 to D12 are cross-connected between the line boards and transparently transmitted with the SDH signal.)
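The decision made by the overhead cross-connect matrix in Figure 21-34 can be sketched as follows. This is an illustrative Python sketch; the channel names and the map structure are assumptions for the example, not a product interface:

```python
def route_overhead(channel, local_channels, passthrough_map):
    """Decide where the overhead cross-connect matrix sends a received channel.

    local_channels: DCC channels the NE itself uses (delivered to the CPU).
    passthrough_map: channel -> outgoing line board, for the channels that
    are transparently transmitted without CPU processing.
    """
    if channel in local_channels:
        return ("CPU", None)
    if channel in passthrough_map:
        return ("line board", passthrough_map[channel])
    return ("drop", None)

# The NE in Figure 21-34 uses D1-D3 itself and transparently passes D4-D12
# (board names here are hypothetical):
LOCAL = {"D1-D3"}
PASS = {"D4-D12": "east line board"}
```

The key point the sketch captures is that transparently transmitted bytes never reach the CPU; they are switched between line boards inside the overhead cross-connect matrix.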
- If the third-party equipment uses bytes D1 to D3 as DCCs, the port of the OptiX NE uses bytes D4 to D12 as DCCs.
- If the third-party equipment uses bytes D4 to D12 as DCCs, the port of the OptiX NE uses bytes D1 to D3 as DCCs.
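The byte-selection rule above reduces to choosing the complementary channel type (an illustrative Python helper, not a product function):

```python
def optix_dcc_bytes(third_party_dcc):
    """Pick the OptiX port's DCC bytes so that they do not collide with
    the bytes the third-party equipment uses for its own DCN."""
    if third_party_dcc == "D1-D3":
        return "D4-D12"
    if third_party_dcc == "D4-D12":
        return "D1-D3"
    raise ValueError("unknown DCC channel type")
```

Applying the rule consistently on every interconnection port keeps the two DCNs from overwriting each other's DCC bytes.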
(The figure shows NE1, NE2, NE3, and NE4 together with the NMS and the third-party NMS, interconnected by fibers and network cables.)
Figure 21-35 shows a transmission network that is comprised of both OptiX equipment and the
third-party equipment that supports the DCC transparent transmission feature. The procedure
for planning the DCN is as follows:
1. Plan DCCs.
   As the third-party equipment uses bytes D1 to D3 as DCCs, the OptiX NE uses bytes D4 to D12 as DCCs.
2. Allocate IDs to the NEs and configure the IP address for the gateway NE.
(The figure shows the NMS, the third-party NMS, and the NEs with their Extended ID-Basic ID, IP address, and gateway: 9-1, 10.0.0.1, 0.0.0.0; 9-2, 129.9.0.2, 0.0.0.0; 9-3, 129.9.0.3, 0.0.0.0; 9-4, 129.9.0.4, 0.0.0.0.)
3. Configure NE1, NE2, NE3, and NE4 to transparently transmit bytes D1 to D3.
(Configuration flowchart: 3. Set DCC transparent transmission; 4. Query ECC routes at the gateway NE; End.)
Table 21-7 Description of the configuration process of the DCC transparent transmission solution

- When setting the IP address information for the gateway NE, you may need to set the default gateway in addition to setting the IP address and subnet mask, according to the situation of the external DCN.
- For the process of configuring the ID of an NE, see Changing the ID of an NE.
- For the process of configuring the IP address of an NE, see Setting Communication Parameters of an NE.

- The DCC protocol type of the line port should be set to HWECC.
- When the third-party equipment uses bytes D1 to D3 as DCCs, set the DCC channel type of the SDH port to D4-D12.
- When the third-party equipment uses bytes D4 to D12 as DCCs, set the DCC channel type of the SDH port to D1-D3.
- For the configuration process, see Configuring DCCs.
- There is an ECC route between the gateway NE and each of its managed non-gateway NEs.
- For the querying process, see Querying the ECC Routes of an NE.
Prerequisites
l
Procedure
Step 1 Select an NE in the NE Explorer. Choose Configuration > NE Attribute from the Function
Tree.
Step 2 Click Modify NE ID. The Modify NE ID dialog box is displayed.
- After the operation, the communication between the U2000 and the NE is interrupted.
- After you change the ID of the NE, the original configuration information may be lost, and thus the protection and other features may fail to work normally.
----End
Follow-up Procedure
After you change the ID of the NE, a warm reset is performed on the SCC board. In this case,
you need to log in to the NE again after a certain period.
Prerequisites
l
Procedure
Step 1 Select an NE from the Object Tree in the NE Explorer. Choose Communication >
Communication Parameters from the Function Tree.
Step 2 Set the communication parameters of the NE according to the network planning.
Prerequisites
l
Context
The NE uses D1-D3 as the DCC by default and allows DCC communication.
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Communication > DCC
Management from the Function Tree.
Step 2 Click the DCC Rate Configuration tab. Then, click New. The system displays the dialog box.
Set the Port, Channel Type, Protocol Type and LAPD Role fields.
Step 3 Click OK, and then click OK in the Operation Result dialog box.
Step 4 Optional: Click Query to query DCCs.
Step 5 Optional: Select the required DCC, and modify the parameters according to Table 21-8. Then,
click Apply to finish the modification.
Step 6 Optional: Select the required DCC, and click Delete to delete the DCC.
NOTE
When the board is configured with TPS, the DCCs on the working board are automatically copied to the
protection board. Therefore, the DCCs on the protection board cannot be deleted.
----End
Example
Table 21-8 Parameters

Parameter: Port

Parameter: Channel Type
Value: D1-D1
Description: It is recommended that you use the default value except in the following cases:
l When the IP over DCC solution or OSI over DCC solution is applied, set Channel Type of the SDH line ports to the same value as the Channel Type of the third-party network.
l When the DCC transparent transmission solution is applied, the Channel Type of the SDH line ports cannot conflict with the channel type of the third-party network.

Parameter: Protocol Type
Value: IP
Description: It is recommended that you use the default value except in the following cases:
l When the HWECC solution is applied, set Protocol Type to HWECC.
l When the OSI over DCC solution is applied, set Protocol Type to OSI.

Parameter: LAPD Role
Value Range: Network, User
Default Value: Network
Description:
l This parameter is valid only when Protocol Type is set to OSI.
l Set LAPD Role to User at one end of a DCC and to Network at the other end of the DCC.
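The pairing and validity rules for LAPD Role in Table 21-8 can be captured in a small check. The following is an illustrative sketch only; the data model and function name are assumptions, not U2000 code:

```python
# Illustrative sketch of the Table 21-8 constraints; the dict-based data
# model is hypothetical and does not reflect any real U2000 API.

def validate_dcc_ends(end_a, end_b):
    """Validate the two ends of one DCC.

    Each end is a dict with 'protocol_type' and (optionally) 'lapd_role'.
    Returns a list of violated rules (empty when the config is valid).
    """
    errors = []
    for end in (end_a, end_b):
        # LAPD Role is meaningful only when Protocol Type is OSI.
        if end["protocol_type"] != "OSI" and end.get("lapd_role"):
            errors.append("LAPD Role is valid only when Protocol Type is OSI")
    if end_a["protocol_type"] == "OSI" == end_b["protocol_type"]:
        # One end of the DCC must be User and the other Network.
        if {end_a["lapd_role"], end_b["lapd_role"]} != {"User", "Network"}:
            errors.append("Set LAPD Role to User at one end and Network at the other")
    return errors

# A valid OSI-over-DCC pair: the roles complement each other.
ok = validate_dcc_ends(
    {"protocol_type": "OSI", "lapd_role": "Network"},
    {"protocol_type": "OSI", "lapd_role": "User"},
)
# An invalid pair: both ends claim the Network role.
bad = validate_dcc_ends(
    {"protocol_type": "OSI", "lapd_role": "Network"},
    {"protocol_type": "OSI", "lapd_role": "Network"},
)
print(ok, bad)
```

The check mirrors the table: the role field matters only under OSI, and the two ends must take complementary roles.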
Prerequisites
l The equipment must be installed according to the planning. The connections of the cables and fibers are correct.
l The NE must be created on the U2000. The communication between the U2000 and the NE must be normal.
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Communication > DCC
Management from the Function Tree.
Step 2 Click the DCC Transparent Transmission Management tab.
Step 3 Click New. The system displays the Create DCC Transparent Transmission Byte dialog box.
Step 4 Set the parameters of the DCC transparent transmission byte.
Step 5 Click OK, and then click OK in the Operation Result dialog box.
----End
Parameters

Parameter: Source Timeslot/Port
Value Range: Line ports

Parameter: Transparent Transmission of Overhead Byte at Source Port
Default Value: D1
Description:
l Only one overhead byte can be selected at a time.
l X1, X2, X3, and X4 represent the self-defined overhead bytes that are used when asynchronous data services are transmitted.
l The overhead byte must not be a used byte (for example, the byte used by a DCC that is in use).

Parameter: Sink Timeslot/Port
Value Range: Line ports

Parameter: Transparent Transmission of Overhead Byte at Sink Port
Default Value: D1
Description:
l Only one overhead byte can be selected at a time.
l The overhead byte must not be a used byte (for example, the byte used by a DCC that is in use).
l Generally, Transparent Transmission of Overhead Byte at Sink Port is set to the same value as Transparent Transmission of Overhead Byte at Source Port. These two parameters, however, can be set to different values.
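The overhead-byte rules above amount to a simple eligibility test. This is a hypothetical sketch; the byte names and the in-use set are illustrative only:

```python
# Hypothetical sketch of the overhead-byte rules for DCC transparent
# transmission; the set of selectable bytes is illustrative only.

AVAILABLE_BYTES = {"D1", "D2", "D3", "E1", "E2", "F1", "X1", "X2", "X3", "X4"}

def check_transparent_byte(byte, bytes_in_use):
    """Return True if 'byte' may be transparently transmitted.

    A single overhead byte is selected at a time, and it must not be a
    byte that an active DCC (or another feature) already uses.
    """
    return byte in AVAILABLE_BYTES and byte not in bytes_in_use

# D1-D3 are used by the default DCC, so D1 is rejected while the
# self-defined byte X1 passes.
in_use = {"D1", "D2", "D3"}
print(check_transparent_byte("D1", in_use))
print(check_transparent_byte("X1", in_use))
```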
Prerequisites
l
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Communication > NE ECC
Link Management from the Function Tree.
Step 2 Click Refresh. The ECC routes of the NE are displayed in the NE ECC Link Management
Table.
Step 3 Check whether the ECC routes and their parameters in NE ECC Link Management Table are
the same as the planned information.
Step 4 Optional: Click Add Manual Route to add manual ECC routes of the NE.
----End
Precautions
NOTE
l For the parameters in this example, see the example in Network Planning.
l This example provides only the configurations of the typical NEs: NE1 and NE3.
Procedure
Step 1 Set the IDs of the NEs. See Changing the ID of an NE.
l NE1
NE ID: 1; extended ID: 9
l NE3
NE ID: 3; extended ID: 9
Step 2 Set the IP address information for NE1. See Setting Communication Parameters of an NE.
l IP address: 10.0.0.1
l Subnet mask: 255.255.0.0
l Default gateway: 0.0.0.0 (default value)
Step 3 Set the DCCs for the NEs. See Configuring DCCs.
l Set the DCC protocol type of all the line ports of all NEs to HWECC.
l Set the DCC channel type of all SDH ports of all NEs to D4-D12.
Step 4 Configure the transparent transmission between bytes D1 to D3 of the west line port of an NE
and bytes D1 to D3 of the east line port of the NE. Perform the configuration for all NEs. See
Configuring DCC Transparent Transmission.
Step 5 Query the ECC routes on NE1. See Querying the ECC Routes of an NE.
NE1 should have ECC routes to NE2, NE3, and NE4.
----End
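The verification in Step 5 amounts to a reachability check over the DCC links. The following sketch assumes the four-NE ring topology of the Network Planning example; it is illustrative, not NMS code:

```python
from collections import deque

# Assumed ring topology from the Network Planning example:
# NE1-NE2-NE3-NE4-NE1, each link carrying a DCC.
dcc_links = [("NE1", "NE2"), ("NE2", "NE3"), ("NE3", "NE4"), ("NE4", "NE1")]

def ecc_reachable(start, links):
    """Breadth-first search over DCC links: every NE the gateway NE can
    reach this way should appear in its ECC routing table."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        ne = queue.popleft()
        for nxt in neighbors.get(ne, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(ecc_reachable("NE1", dcc_links)))  # NE2, NE3, and NE4 expected
```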
Figure 21-38 Networking based on transparent transmission of DCC bytes through the external
clock interface in direct access mode
[Figure: NE1 and NE2 each connect their external clock interface (E1 interface) to the third-party network over E1 cables, and the DCC bytes are carried across the third-party network in E1 services. A second panel shows the DCC bytes reaching the third-party network through SDH interfaces.]
21.6.2 Availability
The DCC transparent transmission through the external interface solution requires the support
of the applicable equipment and boards.
Version Support
Applicable Equipment
Applicable Version
T2000
U2000
Hardware Support
Applicable Board
Applicable Version
Applicable Equipment
R1PCXLN
Remarks
On the OptiX OSN 1500, the AUX board provides the external clock interface.
21.6.4 Principles
The OptiX equipment uses the overhead processing unit and the CPU to transparently transmit
DCC bytes through the external clock interface.
In the receive direction:
1. The external clock interface regards the received E1 services as 2.048 Mbit/s overhead signals, and sends them over the overhead bus to the overhead processing unit.
2. The overhead processing unit sends the DCC bytes that the local NE uses to the CPU.
3. The CPU processes the network management messages carried by the DCC bytes according to the protocol stack for the DCC.
In the transmit direction:
1. The CPU encapsulates the network management messages into the DCC bytes according to the protocol stack for the DCC, and sends them to the overhead processing unit.
2. The overhead processing unit combines the DCC bytes received from the CPU and other overhead bytes (such as overhead bytes for orderwire and synchronous/asynchronous data services) into 2.048 Mbit/s overhead signals, and sends them to the external clock interface.
3. The external clock interface sends the 2.048 Mbit/s overhead signals as E1 services to a peripheral device.
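The receive-direction split performed by the overhead processing unit can be sketched as a small demultiplexer. The frame layout below (a dict of named overhead bytes) is purely illustrative:

```python
# Conceptual sketch of the overhead processing unit's receive path; the
# frame representation is illustrative, not the real overhead bus format.

def split_overhead_frame(frame, dcc_bytes=("D1", "D2", "D3")):
    """Separate the DCC bytes the local NE uses from the remaining
    overhead (orderwire, synchronous/asynchronous data, ...).

    Returns (to_cpu, passthrough): the DCC payload handed to the CPU for
    protocol processing, and everything else that stays on the overhead
    bus.
    """
    to_cpu = {k: frame[k] for k in dcc_bytes if k in frame}
    passthrough = {k: v for k, v in frame.items() if k not in dcc_bytes}
    return to_cpu, passthrough

frame = {"D1": 0x7E, "D2": 0xFF, "D3": 0x03, "E1": 0x00, "X1": 0x55}
to_cpu, rest = split_overhead_frame(frame)
print(sorted(to_cpu), sorted(rest))
```

The transmit direction is the mirror image: the CPU's DCC bytes are merged back with the passthrough overhead into the 2.048 Mbit/s signal.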
Figure 21-40 shows the mechanism of transparent transmission of DCC bytes through the
external clock interface. As shown in the figure, the D1-D3 bytes are used to carry network
management messages.
Figure 21-40 Mechanism of transparent transmission of DCC bytes through the external clock
interface
[Figure: E1 services enter through the external clock interface on the auxiliary board, travel over the overhead bus to the overhead processing unit, which exchanges the D1-D3 bytes with the CPU on the SCC.]
2. Planning of the DCC byte for carrying the network management messages:
The two NEs at the two ends should use the same DCC bytes that are transmitted transparently over the third-party network. The D1-D3 bytes are recommended.
3. Planning of transmission of E1 services that carry the DCC bytes over the third-party network:
The two NEs at the two ends should use the same DCC bytes. The DCC bytes are transmitted transparently over the third-party network between the two NEs. The D1-D3 bytes are recommended.
2. Plan the DCC byte for carrying the network management messages.
The D1-D3 bytes are used to transmit network management messages between NE1 and NE2.
3. Plan transmission of E1 services that carry the DCC bytes over the third-party network.
Between NE1 and the third-party network, the DCC bytes travel over an E1 cable. Between NE2 and the third-party network, the DCC bytes travel over a fiber.
l The E1 services that carry the DCC bytes of NE1 travel from NE1 to the third-party network through the external clock interface and over the E1 cable.
l The E1 services are transmitted transparently over the third-party network.
l The E1 services that carry the DCC bytes of NE2 travel from NE2 through the external clock interface and over the E1 cable to the tributary board, where the E1 services are encapsulated into a VC-12 that is transmitted to the third-party network.
Figure 21-41 Example of networking for planning transparent transmission of DCC bytes
through the external clock interface
[Figure: the NM connects to NE1. NE1 sends the DCC bytes to the third-party network over an E1 cable from its external clock interface; NE2 connects to the third-party network through an SDH interface, with its DCC bytes carried over an E1 cable from its external clock interface.]
[Flowchart fragment: if the indirect access mode is adopted, configure cross-connections for E1 services before the procedure ends; otherwise, the procedure ends directly.]
Table 21-9 Description of the configuration process of the DCC transparent transmission
through the external clock interface solution
Procedure
Notes
NOTE
Configure cross-connections for the E1 service only when the indirect
access mode is adopted.
l For details on the configuration process, see Configuring a Cross-Connection for E1 Services.
Prerequisites
l
Context
The NE uses D1-D3 as DCC by default and allows the DCC communication.
Procedure
Step 1 Select the NE from the Object Tree in the NE Explorer. Choose Communication > DCC
Management from the Function Tree.
Step 2 Click the DCC Rate Configuration tab. Then, click New. The system displays the dialog box.
Set the Port, Channel Type, Protocol Type and LAPD Role fields.
Step 3 Click OK, and then click OK in the Operation Result dialog box.
Step 4 Optional: Click Query to query DCCs.
Step 5 Optional: Select the required DCC, and modify the parameters according to Table 21-10. Then,
click Apply to finish the modification.
Step 6 Optional: Select the required DCC, and click Delete to delete the DCC.
NOTE
When the board is configured with TPS, the DCCs on the working board are automatically copied to the
protection board. Therefore, the DCCs on the protection board cannot be deleted.
----End
Example
Table 21-10 Parameters

Parameter: Port

Parameter: Channel Type
Value: D1-D1
Description: It is recommended that you use the default value except in the following cases:
l When the IP over DCC solution or OSI over DCC solution is applied, set Channel Type of the SDH line ports to the same value as the Channel Type of the third-party network.
l When the DCC transparent transmission solution is applied, the Channel Type of the SDH line ports cannot conflict with the channel type of the third-party network.

Parameter: Protocol Type
Value: IP
Description: It is recommended that you use the default value except in the following cases:
l When the HWECC solution is applied, set Protocol Type to HWECC.
l When the OSI over DCC solution is applied, set Protocol Type to OSI.

Parameter: LAPD Role
Value Range: Network, User
Default Value: Network
Description:
l This parameter is valid only when Protocol Type is set to OSI.
l Set LAPD Role to User at one end of a DCC and to Network at the other end of the DCC.
Prerequisites
l
Procedure
Step 1 Click the NE in the NE Explorer, and choose Configuration > SDH Service Configuration
from the Function Tree.
Step 2 Click New, and set the necessary parameters in the Create SDH Service dialog box displayed.
Prerequisites
l The external clock interface planned for transparent transmission of DCC information must not be used as a clock signal interface.
l You must be an NM user with "NE and network operator" authority or higher.
l For the parameters in this example, see the example in Network Planning.
Context
Procedure
Step 1 Set the DCCs for the NEs. See Configuring DCCs.
l Set Channel Type to D1-D3.
l Set Protocol Type to HWECC.
Step 2 Configure cross-connections for E1 services on NE2. See Configuring a Cross-Connection for
E1 Services.
----End
21.7.1 Troubleshooting
When the DCN is faulty, NEs become unreachable to the U2000. This topic describes the troubleshooting of
common DCN faults.
Symptom
On the U2000, a single NE is unreachable and the other NEs of the same site are reachable.
Impact on System
The unreachable NE cannot be managed.
Possible Causes
l
Cause 6: The data communications channel (DCC) of the optical interface is disabled.
Cause 8: The network size is too large, and the embedded control channel (ECC)
communication between NEs exceeds the processing capability of the NEs.
U2000 LCT
U2000
Procedure
Step 1 Cause 1: The password for an NE user is incorrect.
1. Check whether the entered password is correct. If the password is incorrect, enter the correct password.
2. Check whether the NE recovers. If the NE does not recover, check whether the fault occurs due to other causes.
2. Check whether the external power supply device is working in the normal state.
1. Reset the system control board. Then, check whether the services are restored. If the services are not restored, check whether the fault occurs due to other causes.
2. Replace the faulty board. Then, check whether the services are restored. If the services are not restored, check whether the fault occurs due to other causes.
Step 4 Cause 4: The IP address or the ID of the NE is incorrectly set. As a result, the NE is
unreachable.
1. Log in to the NE through the U2000 LCT and restore the original IP address and ID of the NE according to the record.
2. Check whether the NE recovers. If the NE does not recover, check whether the fault occurs due to other causes.
Use the optical time domain reflectometer (OTDR) to test the fiber. Analyze the line
attenuation curve displayed by the meter to determine whether a fiber cut occurs on the line.
For how to use the meter, see the OTDR user guide.
1. Browse alarms and determine the board that reports the alarm.
2. Load the NE software to the board, and perform a warm reset on the board. Then, check whether the alarm is cleared.
Step 8 Cause 8: The network size is too large, and the ECC communication between NEs exceeds
the processing capability of the NEs.
1. Check the number of NEs at a single station. When multiple devices are connected through a hub (or through inter-subrack cascading ports) and communicate with each other by using the extended ECC function of the network port, enable the automatic extended ECC function for no more than four devices connected to the same hub. If more than four devices are connected to the same hub, enable the manual extended ECC function instead, to prevent ECC storms.
2. Contact Huawei technical support engineers to check whether the data communication network (DCN) communication between NEs exceeds the processing capability of the NEs.
3. Check whether the NE recovers. If the NE does not recover, check whether the fault occurs due to other causes.
Step 9 Cause 9: The D1-D3 bytes for DCC communication are deleted.
1.
2.
3. Check whether the services are restored. If the services are not restored, contact Huawei technical support engineers.
----End
Related Information
None.
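The hub-size rule in Cause 8 above is a simple threshold decision. The four-device limit comes from the text; the function itself is only an illustrative sketch:

```python
# Sketch of the extended-ECC mode recommendation from Cause 8; the
# threshold of four devices per hub is taken from the text.

def extended_ecc_mode(devices_on_hub):
    """Recommend the automatic extended ECC function for small hubs and
    the manual extended ECC function for larger ones, to prevent ECC
    storms."""
    if devices_on_hub <= 4:
        return "automatic"
    return "manual"

print(extended_ecc_mode(3))  # small hub: automatic extended ECC is safe
print(extended_ecc_mode(6))  # larger hub: switch to manual extended ECC
```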
Symptom
On the U2000, all NEs of a certain subnet are unreachable.
Impact on System
The unreachable NEs cannot be managed.
Possible Causes
l
Cause 3: The network cable connections between the gateway NE and the hub are
interrupted or the hub ports are faulty.
Cause 7: The network size is too large, and the embedded control channel (ECC)
communication between NEs exceeds the processing capability of the NEs.
U2000 LCT
Procedure
Step 1 Cause 1: The password for an NE user is incorrect.
1. Check whether the entered password is correct. If the password is incorrect, enter the correct password.
2. Check whether the NE recovers. If the NE does not recover, check whether the fault occurs due to other causes.
Step 2 Cause 2: The gateway NE is faulty, and there is no standby gateway NE. As a result, the
NEs are unreachable.
1.
2. Check whether the external power supply device is working in the normal state.
Step 3 Cause 3: The network cable connections between the gateway NE and the hub are
interrupted or the hub ports are faulty. As a result, the NEs are unreachable.
1. Check whether the network cables between the gateway NE and the hub are connected improperly.
2. Use the cable testing meter to check whether the network cables are normal.
Step 4 Cause 4: The IP address or the ID of the gateway NE is changed by mistake. As a result,
the NEs are unreachable.
1. Log in to the NE through the U2000 LCT and restore the original IP addresses and IDs of the NEs according to the record.
2. Check whether the NEs recover. If the NEs do not recover, check whether the fault occurs due to other causes.
Step 5 Cause 5: The fibers are faulty or connected improperly. As a result, the NEs are
unreachable.
1. Use the optical time domain reflectometer (OTDR) to test the fiber. Analyze the line attenuation curve displayed by the meter to determine whether a fiber cut occurs on the line. For how to use the meter, see the OTDR user guide.
2.
3. Check whether the NEs recover. If the NEs do not recover, check whether the fault occurs due to other causes.
1. Reset the system control board. Then, check whether the services are restored. If the services are not restored, proceed to the next step.
2. Replace the faulty system control board, and check whether the services are restored. If the services are not restored, check whether the fault occurs due to other causes.
Step 7 Cause 7: The network size is too large, and the ECC communication between NEs exceeds
the processing capability of the NEs.
1. Check the number of NEs at a single station. When multiple devices are connected through a hub (or through inter-subrack cascading ports) and communicate with each other by using the extended ECC function of the network port, enable the automatic extended ECC function for no more than four devices connected to the same hub. If more than four devices are connected to the same hub, enable the manual extended ECC function instead, to prevent ECC storms.
2. Contact Huawei technical support engineers to check whether the data communication network (DCN) communication between NEs exceeds the processing capability of the NEs.
3. Check whether the NE recovers. If the NE does not recover, check whether the fault occurs due to other causes.
----End
Related Information
To prevent all NEs of the subnet from becoming unreachable due to the fault of the gateway NE,
it is recommended to install a backup gateway NE in the subnet.
Symptom
Symptom 1: NEs are frequently unreachable to the NMS.
Symptom 2: NEs become unreachable after they have been logged in for a while.
Impact on System
The unreachable NEs cannot be managed.
Possible Causes
l
Cause 2: The network is too large, and the ECC communication between NEs
exceeds the processing capability of the NEs.
Cause 3: On the network, more than one user logs in to the NE.
U2000
Procedure
Step 1 Cause 1: The IP addresses or IDs of the NEs conflict. As a result, the NEs are unreachable
frequently.
1. Check whether the IP addresses or IDs of the NEs conflict with those of other NEs or devices on the network.
Step 2 Cause 2: The network is too large, and the ECC communication between NEs
exceeds the processing capability of the NEs. As a result, the NEs are unreachable
frequently.
1. Check whether the network is very large. If it is, divide the network appropriately into subnets.
2. Check the number of NEs at a single site. When multiple devices are connected through a hub (or inter-subrack cascading) and communicate with each other through the extended ECC function of the network port, enable the automatic extended ECC function for a maximum of four devices on the same hub; if more than four devices are connected to the same hub, enable the manual extended ECC function, to avoid an ECC storm.
3. Check whether the number of NEs managed by the gateway NE exceeds the DCC processing capability of the SCC board. If it does, replace the SCC board with a more powerful one.
4. Check whether the NEs recover. If the NEs do not recover, check whether the fault is due to other causes.
Step 3 Cause 3: On the network, more than one user logs in to the NE. As a result, the NE is
unreachable frequently.
1.
2. Check whether the NE recovers. If the NE does not recover, check whether the fault is due to other causes.
Step 4 Cause 4: The SCC board is faulty. As a result, the NE is unreachable frequently.
1. Perform a cold reset on the SCC board or reseat the SCC board.
2. Check whether the NE recovers. If the NE does not recover, replace the faulty board.
3. Check whether the NE recovers. If the NE does not recover, check whether the fault is due to other causes.
----End
Related Information
None
MC-B8 Communication Between Equipment and NMS Fails After Standby CXL Board Is
Switched to Main Mode
MC-B9 Fibers Are Not Connected Between Equipment but ECC Communication Between
Equipment Is Available
MC-B26 Fault in Standby XCE Board Results in Abnormal Communication of AUX board
in Extended Subrack
MC-B29 Adding Boards Fails Due to Incorrect Use of EXT Port on AUX Board
MC-B60 Excessive NEs on the ECC Network Causes an ECC Storm on the Entire Network
After the Route Changes
MC-B66 ECC Storm Occurs when Active/Standby Switching of SCC Boards Is Triggered
by Removing SCC Board on OptiX OSN 3500
MC-B70 The NE Becomes Unreachable for the NMS After the Active/Standby Switching
Because the NE ID and IP Address of the Standby System Control Board Are Changed
MC-B76 Communication Anomaly Occurs Between the Equipment and the NMS Due to
the Setting of the Firewall
22 Inband DCN
[Figure: on the inband DCN, DCN information is carried together with services over MPLS tunnels and Ethernet links.]
[Figure: the HW ECC protocol stack, comprising the physical layer, media access layer, network layer, and transport layer, shown alongside the corresponding layers of the OSI model.]
Physical Layer
The main function of the physical layer is to control the physical channel, and to receive and transmit the data in the physical channel. The physical layer performs the following functions:
l The physical layer receives the data in the physical channel and transfers the data to the upper layer.
l The physical layer receives the data frame transferred from the upper layer and sends it to the physical channel.
Media Access Layer
The main function of the media access layer (MAC layer) is to activate or deactivate the DCC channels between the physical layer and the network layer. The MAC layer performs the following functions:
l The MAC layer receives the data frame transferred from the physical layer. If the destination address is the same as the address of the local station, the MAC layer transfers the data frame to the network layer. Otherwise, the MAC layer discards the data frame.
l The MAC layer sends the data frame from the network layer. If the destination address of the data frame has a MAC connection, the MAC layer sends the data frame to the corresponding physical channel at the physical layer through the MAC connection. Otherwise, the MAC layer discards the data frame.
Network Layer
The main function of the network layer (NET layer) is to provide the route addressing function for data frames and the route management (including route creation and maintenance) function for the ECC network.
l The NET layer receives the packet transferred from the MAC layer. If the destination address of the packet is the same as the address of the local station, the NET layer transfers the packet to the transport layer. Otherwise, the NET layer requests the MAC layer to transfer the packet to the transfer station according to the routing entry that matches the destination address in the NET layer routing table.
l The NET layer sends the packet from the transport layer. The NET layer requests the MAC layer to transfer the packet to the transfer station according to the routing entry that matches the destination address of the packet in the NET layer routing table.
Transport Layer
The main function of the transport layer is to provide the end-to-end communication service for the upper layer. The communication between the OptiX equipment and the NMS is controlled by the end-to-end connection-oriented service at the application layer. Hence, the transport layer provides only the end-to-end connectionless communication service, namely, the transparent data transfer service.
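The MAC- and NET-layer filtering and forwarding rules above can be modeled in a few lines. This is a toy sketch; the addresses and routing table are illustrative, not the real HW ECC protocol:

```python
# Toy model of the HW ECC MAC- and NET-layer forwarding rules described
# above; the station addresses and routing table are illustrative only.

LOCAL_ADDR = "NE1"
NET_ROUTES = {"NE3": "NE2"}  # destination -> next hop (transfer station)

def mac_receive(frame):
    """MAC layer: deliver frames addressed to this station upward,
    discard the rest."""
    return frame["payload"] if frame["dest"] == LOCAL_ADDR else None

def net_forward(packet):
    """NET layer: packets for the local station go to the transport
    layer; others are relayed via the routing table, or dropped when no
    routing entry matches."""
    if packet["dest"] == LOCAL_ADDR:
        return ("deliver", None)
    next_hop = NET_ROUTES.get(packet["dest"])
    return ("forward", next_hop) if next_hop else ("drop", None)

print(mac_receive({"dest": "NE1", "payload": "msg"}))
print(net_forward({"dest": "NE3"}))  # relayed through the transfer station NE2
```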
IP Protocol Stack
Figure 22-3 shows the architecture of the IP protocol stack.
Figure 22-3 Architecture of the IP protocol stack
[Figure: the IP protocol stack comprises OSPF as the routing protocol, TCP/UDP at the transport layer, IP at the network layer, and PPP/PPPoE over Ethernet (FE/GE) below.]
Physical Layer
The main function of the physical layer is to provide data transmission channels for the
data terminal equipment. The Ethernet physical channel provided by the FE or GE interface
functions as the channel on the physical layer.
Network Layer
The main function of the network layer is to specify the network layer address for a network
entity and to provide the transferring and addressing functions.
The NE uses the IP protocol and the corresponding ARP and ICMP protocols to realize the
network layer functions.
Transport Layer
The main function of the transport layer is to provide the end-to-end communication service
for the upper layer.
The NE supports the connection-oriented TCP protocol and the connectionless UDP
protocol.
Routing Protocols
Routing protocols belong to the scope of the application layer. The NE supports Open
Shortest Path First (OSPF).
The OSPF protocol is a dynamic routing protocol based on the link status. The OSPF
protocol divides an autonomous system (AS) into several areas. Route nodes exchange
routing information in an area. The route nodes at the edge of an area make a summary and
exchange information with the routers in other areas. Areas are identified by area IDs. The
area ID has the same format as the IP address.
Currently, the OSPF protocol used by the OptiX equipment supports only the routes within
an area and does not support the routes between areas. Hence, the gateway NE and all its
managed non-gateway NEs must be in the same OSPF area. By default, the OSPF protocol
is enabled for the line port of the OptiX equipment but is disabled for the Ethernet port.
Hence, to form a network through the Ethernet port, you need to modify the OSPF setting
of the NE.
In addition to the dynamic routing protocol, the NE supports static routes. Static routes
refer to the routes that are manually configured. Static routes have a higher priority than
dynamic routes. When there is a route conflict, the equipment always selects static routes
first.
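The "static routes win over dynamic routes" rule above is a simple priority order in route selection. The following sketch is illustrative; the route-table structure is hypothetical:

```python
# Sketch of the route-priority rule: manually configured static routes
# take precedence over dynamic (OSPF) routes. The table format is
# hypothetical.

def select_route(dest, static_routes, ospf_routes):
    """Return (source, next_hop) for a destination, preferring a static
    route over an OSPF route when both match."""
    if dest in static_routes:
        return ("static", static_routes[dest])
    if dest in ospf_routes:
        return ("ospf", ospf_routes[dest])
    return (None, None)

static = {"10.0.2.0/24": "10.0.0.2"}
ospf = {"10.0.2.0/24": "10.0.0.9", "10.0.3.0/24": "10.0.0.3"}
print(select_route("10.0.2.0/24", static, ospf))  # conflict: the static route wins
print(select_route("10.0.3.0/24", static, ospf))  # only OSPF knows this destination
```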
901
ID of the destination NE
Transfer distance
Destination IP address
Subnet mask
Interface
When the NMS directly accesses an NE, there must be an IP route between the NMS and the
NE.
[Figure: the NMS connects through a LAN switch and a router to the packet switch network; on the gateway NE, inband DCN packets and Ethernet service packets share the FE/GE port on which access is enabled.]
After the access control function is enabled on an Ethernet service port on the gateway NE:
l The IP address of this network management port can be specified based on network requirements but cannot be in the same network segment as the IP addresses of NEs.
l DCN packets transmitted or received by this network management port carry VLAN IDs used on the inband DCN. Therefore, the VLAN tags need to be stripped off by a device (for example, the LAN switch in the figure) before the DCN packets reach the NMS.
l The NMS can communicate with the gateway NE using the IP address of this network management port.
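The addressing rule for the network management port can be expressed as a segment check. This sketch assumes a /24 segment size for illustration; the addresses are examples only:

```python
# Illustrative check of the NM-port addressing rule: the network
# management port's IP may be chosen freely but must not fall in the
# same network segment as the NE IP addresses. The /24 prefix is an
# assumption for this sketch.

import ipaddress

def nm_port_ip_ok(nm_ip, ne_ips, prefix=24):
    """Return True when nm_ip shares no /prefix segment with any NE IP."""
    nm_net = ipaddress.ip_network(f"{nm_ip}/{prefix}", strict=False)
    return all(ipaddress.ip_address(ne) not in nm_net for ne in ne_ips)

ne_ips = ["129.9.0.1", "129.9.0.2"]      # example NE addresses
print(nm_port_ip_ok("10.1.1.1", ne_ips))     # different segment: allowed
print(nm_port_ip_ok("129.9.0.100", ne_ips))  # same segment as the NEs: rejected
```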
22.4 Availability
The inband DCN function requires the support of the applicable equipment, boards, and
software.
Version Support
Applicable Equipment
Applicable Version
T2000
U2000
Applicable Equipment
Applicable Version
U2000
Hardware Support
Applicable Board
Applicable Version
Applicable Equipment
N1PEG16
N1PEX1
N1PETF8
R1PEFS8
Q1PEGS2
R1PEGS1
N1PEG8
N2PEX1
N1PEX2
N1PEFF8
R1PEF4F
TNN1EX2
TNN1EG8
TNN1ETMC
TNN1EFF8
Remarks
[Feature table (cells largely elided): rows for VLAN ID and bandwidth, Protocol type, Access control, DCN (inband DCN using IP protocols), and Inband DCN traversing a third-party network, each with Applicable Object and Remarks columns.]
Maintenance Principles
None.
22.6 Principles
DCN packets carry management information for the U2000 to manage equipment. To achieve
the DCN function, DCN channels must be set up between the U2000 and NEs to transmit DCN
packets.
1. The NMS transfers the application layer message to the gateway NE through the TCP connection between them.
2. The gateway NE extracts the message from the TCP/IP protocol stack and reports the message to the application layer.
3. The application layer of the gateway NE queries the address of the destination NE in the message. If the address of the destination NE is not the same as the address of the local station, the gateway NE queries the core routing table of the application layer according to the address of the destination NE to obtain the corresponding route and the communication protocol stack of the transfer NE.
4. When the packet that encapsulates the message is received, the NET layer of the transfer NE queries the address of the destination NE of the packet. If the address of the destination NE is not the same as the address of the local station, the transfer NE queries the NET layer routing table according to the address of the destination NE to obtain the corresponding route and then transfers the packet.
5. After the packet is received, the NET layer of the destination NE reports the packet to the application layer through the L4 layer because the address of the destination NE of the packet is the same as the address of the local station. The application layer performs operations according to the message sent from the NMS.
[Figure: packet forwarding using the HW ECC protocol stack (access through a gateway NE). The NMS runs Application/TCP/IP over Ethernet to the gateway NE; the gateway NE bridges this stack to the HW ECC stack (Application/L4/NET/MAC over FE/GE); the transfer NE relays at the NET layer, and the destination NE terminates the packet at its application layer.]
NOTE
The MAC refers to the media access layer, NET refers to the network layer, and L4 refers to the transport
layer.
The NMS forwards application-layer packets to the gateway NE through the TCP/IP
connection between them.
2.
The gateway NE extracts the packets from the TCP/IP protocol stack and transmits them
to the application layer.
3.
The application layer of the gateway NE processes the packets according to the destination
address.
4.
Issue 03 (2012-11-30)
a.
If the destination address differs from the address of the local station, the gateway NE
queries the corresponding route from the application-layer core routing table
according to the destination address.
b.
The gateway NE encapsulates the packets into Ethernet data frames, adds VLAN IDs
to identify the packets as DCN packets, and forwards the DCN packets to the
corresponding port.
The transfer NE processes the Ethernet data packets according to their VLAN IDs.
a.
Data packets with management VLAN IDs are transmitted to the upper layer as DCN
packets for protocol processing. Data packets with non-management VLAN IDs are
forwarded as service packets.
b.
The network layer of the transfer NE parses the destination IP address of the DCN
packets. If the destination IP address differs from the IP address of the local station,
the transfer NE queries the corresponding route from the routing table generated by
the OSPF protocol and forwards the packets to the corresponding port.
5.
After receiving the Ethernet data packets, the destination NE transmits the DCN packets
with management VLAN IDs to the upper layer for protocol processing. The destination
IP address of the DCN packets is the IP address of the local station. Hence, the DCN packets
are transmitted from the transport layer to the application layer. The application layer
performs operations according to messages carried in the packets from the NMS.
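The VLAN-based handling described in steps 4 and 5 can be sketched as follows. This is an illustrative fragment; the management VLAN ID 4094 is the default value used elsewhere in this chapter, and the function and return names are placeholders:

```python
MANAGEMENT_VLAN_ID = 4094  # default DCN VLAN ID

def classify_frame(vlan_id, dst_ip, local_ip):
    """Classify an Ethernet frame received by a transfer or destination NE.

    Frames tagged with the management VLAN ID are DCN packets; all
    other frames are forwarded as ordinary service packets.
    """
    if vlan_id != MANAGEMENT_VLAN_ID:
        return "forward_as_service"
    if dst_ip == local_ip:
        # Local station: pass the DCN packet up to the application layer.
        return "deliver_to_upper_layer"
    # Not for this station: query the routing table generated by the
    # OSPF protocol and forward the DCN packet to the corresponding port.
    return "route_via_ospf_table"

print(classify_frame(100, "129.9.0.3", "129.9.0.1"))   # service traffic
print(classify_frame(4094, "129.9.0.3", "129.9.0.1"))  # DCN packet in transit
print(classify_frame(4094, "129.9.0.1", "129.9.0.1"))  # DCN packet for this NE
```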
Figure 22-6 Packet forwarding using the IP protocol stack (access through a gateway NE)
[Figure 22-6: The NMS (Application/TCP/IP/Ethernet) connects to the gateway NE; the gateway NE, transfer NE, and destination NE exchange packets through the Application/TCP/IP layers over PPP/PPPoE and FE/GE ports.]
Direct Access
Figure 22-7 illustrates how packets are forwarded using the IP protocol stack when the NMS
accesses NEs directly.
Figure 22-7 Packet forwarding using the IP protocol stack (direct access)
[Figure 22-7: The NMS (Application/TCP/IP/Ethernet) connects to the NEs directly; packets traverse the transfer NEs to the destination NE through the IP layer over PPP/PPPoE and FE/GE ports.]
In this access mode, the gateway NE functions as an ordinary NE and forwards packets at the
network layer.
[Figure: legend — MPLS tunnel, Ethernet link, service, DCN information]
Prerequisites
You must be an NM user with "NE operator" authority or higher.
Background Information
l
If the default VLAN ID of the DCN conflicts with the VLAN ID used by the service, change
the VLAN ID of the DCN manually. Ensure that all the DCN channels use the same VLAN
ID.
l
If the DCN packets do not use all the available bandwidth, the idle bandwidth can be shared
with the service packets.
l
It is recommended that you change the VLAN ID of the DCN on the non-gateway NEs
before changing the VLAN ID of the DCN on the gateway NE. Otherwise, the non-gateway
NEs may become unreachable from the NMS.
Procedure
Step 1 In the NE Explorer, select an NE and choose Communication > DCN Management from the
Function Tree.
Step 2 Click the Bandwidth Management tab and set the parameters.
NOTE
l If you click Default, the corresponding parameter automatically takes the default value.
l Generally, the default VLAN ID is recommended. When the VLAN ID used by a service conflicts with
the VLAN ID used by a DCN channel, you can define another VLAN ID for the DCN channel. Ensure
that the VLAN ID of the DCN channel is the same as the VLAN IDs of the other DCN channels.
Prerequisites
You must be an NM user with "NE operator" authority or higher.
Background Information
NOTE
l You need to set the NMS access parameters only when the equipment accesses the NMS by using an
Ethernet service board.
l By default, Enabled Status is Disabled.
CAUTION
When the DCN port is interconnected with the NMS, the IP address of the NMS computer and
the IP address of the NNI on the equipment cannot be in the same network segment.
Procedure
Step 1 In the NE Explorer, select an NE and choose Communication > DCN Management from the
Function Tree.
Step 2 Click the Access Control tab and set the parameters.
Prerequisites
You must be an NM user with "NE operator" authority or higher.
Procedure
Step 1 In the NE Explorer, select an NE and choose Communication > DCN Management from the
Function Tree.
Step 2 Click the Port Settings tab.
Step 3 In the corresponding Enabled Status field of the related port, select Enabled.
NOTE
When you configure an Ethernet service that exclusively occupies a port, disable the DCN function of the
port.
Prerequisites
You must be an NM user with "NE operator" authority or higher.
Procedure
Step 1 In the NE Explorer, select an NE and choose Communication > DCN Management from the
Function Tree.
Step 2 Click the Protocol Settings tab. Set Protocol Type of the corresponding port.
NOTE
Protocol Type is set to the IP protocol by default. The HWECC protocol is an internal protocol of Huawei.
For communications between the OptiX OSN equipment, the IP protocol or the HWECC protocol can be
used. For communications between the OptiX OSN equipment and the PTN equipment or the third-party
equipment, only the IP protocol can be used.
NE1 and NE3 are connected to the NMS through NE2. The network management information
and the service information share the same channel for transport.
[Figure 22-9: Networking diagram — the NMS (129.9.0.10) connects to NE2 (129.9.0.1) over an Ethernet link; NE2 connects to NE1 (129.9.0.2) and NE3 (129.9.0.3) over fiber through its 5-N1PEG16 ports (Port1 and Port2); NE1 and NE3 each use 5-N1PEG16-1 (Port1).]
Service Planning
The engineering planning department plans the project according to the related requirements
and outputs detailed planning information. Figure 22-9 is used as an example to show the service
planning. Table 22-1 shows how to plan the parameters for the inband DCN configuration.
Table 22-1 Planning of the parameters for the inband DCN configuration

Attribute                                NE1         NE2         NE3
Ethernet Board                           5-N1PEG16   5-N1PEG16   5-N1PEG16
Bandwidth Management: VLAN ID            4094        4094        4094
Bandwidth Management: Bandwidth (kbps)   512         512         512
Port Settings: Enabled Status            Enabled     Enabled     Enabled
Protocol Settings: Protocol Type         IP          IP          IP
Prerequisites
l
You must understand the networking, requirements and service planning of the example.
Procedure
Step 1 Set the communication parameters for each NE by using the U2000. For details, see Setting
Communication Parameters of an NE.
The communication parameters for NE1 are as follows:
l IP address: 129.9.0.2
l Subnet Mask: 255.255.0.0
The communication parameters for NE2 are as follows:
l IP address: 129.9.0.1
l Subnet Mask: 255.255.0.0
The communication parameters for NE3 are as follows:
l IP address: 129.9.0.3
l Subnet Mask: 255.255.0.0
Step 2 Set NE2 as the gateway NE for NE1 and NE3. For the configuration method, see Changing the GNE for
NEs.
Step 3 Configure the bandwidth of the inband DCN on NE1, NE2, and NE3 by using the U2000. For
the configuration method, see Setting the VLAN ID and Bandwidth Used by the Inband DCN.
The parameters related to the bandwidth of inband DCN of each NE are set as follows:
After you complete the preceding steps, log in to each NE by using the U2000 to ensure that communication
is normal.
----End
Prerequisites
The inband DCN must be configured.
Background Information
You can perform the verification according to the following aspects:
l
On the U2000, create a non-gateway NE. After it is created successfully, the non-gateway
NE is reachable from the U2000 and can upload data to the U2000 normally.
l
On the U2000, query the DCN management data of the non-gateway NE to check whether
the configuration data of the inband DCN is correct.
l
After you change the settings of NE parameters such as the DCN protocol mode, the DCN
network communication remains normal.
Procedure
Step 1 On the U2000, create a non-gateway NE. After it is created successfully, the non-gateway NE
can connect to and upload data to the U2000 normally.
NOTE
In the case of new equipment, set Gateway Type to Non-Gateway, and then set Affiliated Gateway to
the gateway NE on the inband DCN.
Step 2 On the U2000, query the DCN management data of the non-gateway NE to check whether the
configuration data of the inband DCN is correct.
1.
2.
3.
Click Refresh and check whether the Communication Status of the non-gateway NE is
Normal.
Step 3 After you change the settings of NE parameters such as the DCN protocol mode, verify that the
DCN network communication is normal.
NOTE
On the network where inband DCN communication is performed, the parameters of all the NEs must be
the same. You need to change the parameters such as the DCN protocol mode of non-gateway NEs before
changing the parameters such as the DCN protocol mode of the gateway NEs.
1.
In the Main Topology, right-click the NE that you want to configure and choose NE
Explorer from the shortcut menu.
2.
In the NE Explorer, select the NE that you want to configure. Then, choose
Configuration > Communication > DCN Management.
3.
4.
Click Apply. Then, the Operation Result dialog box is displayed, indicating that the
operation is successful.
----End
22.11 Troubleshooting
Following a certain troubleshooting flow facilitates your fault locating and rectification.
Symptom
The common faults regarding the inband DCN are as follows:
l
The communication between the U2000 and NEs is interrupted, and the NE icons on the
U2000 become unavailable.
l
No response is returned after a command is issued on the U2000. If no response is returned
within 2 minutes, the communication between the U2000 and the NE is interrupted.
Impact on System
l
After the communication between an NE and the U2000 is interrupted, the NEs that
communicate with the U2000 through that NE also become unreachable unless they can
connect to the U2000 by other means. The other NEs are not affected.
Possible Causes
l
Cause 1: The physical link between the faulty NE and the U2000 is disconnected.
l
Cause 2: The ports of the faulty NE on the inband DCN are disabled.
l
Cause 3: The DCN VLAN IDs of the NEs at both ends are not the same.
l
Cause 4: The NEs at both ends of an optical fiber use different protocols.
l
Cause 5: The faulty NE fails to obtain DCN packets due to loss of received signals or
extremely low receive optical power.
l
Cause 6: The board configured with the inband DCN channel is faulty.
l
Cause 7: No response is returned for inband DCN packets, because the SCC board on the
faulty NE is being reset or active/standby protection switching occurs on another board.
l
Cause 8: The bandwidth of the inband DCN channel is extremely low or other services
preempt the bandwidth of the inband DCN channel.
NOTE
If communication between NEs and the U2000 is interrupted, rectify the fault on the gateway NE
prior to the fault on the non-gateway NE.
If the communication between NEs and the U2000 is normal, rectify the fault on the non-gateway
NE prior to the fault on the gateway NE. This method ensures that the unaffected NEs can keep
communicating with the U2000 during the troubleshooting process.
Troubleshooting Flow
Following a certain troubleshooting flow facilitates your fault locating and rectification.
[Figure: Troubleshooting flowchart — when the communication between the NMS and an NE is interrupted, check in turn whether the Ethernet cable or fiber is disconnected (reconnect it), whether the DCN port is disabled (enable the port on the DCN), whether the received signal is lost, and whether the board is faulty (replace the board). If no response is returned for a command issued on the NMS, or the information queried on the NMS is lost, the bandwidth of the DCN path is very low. Repeat the checks until the fault is rectified.]
Procedure
Step 1 Cause 1: The physical link between the faulty NE and the U2000 is disconnected.
1.
Check whether the Ethernet cable or fiber of the faulty NE is disconnected from the port.
If yes, reconnect it.
Step 2 Cause 2: The ports of the faulty NE on the inband DCN are disabled.
1.
Check whether the DCN function of the port that is connected to the fiber is enabled by
default. If not, connect the fiber to a port whose DCN function is enabled by default.
2.
Check whether the DCN access function is enabled for the ports at both ends of the link.
If not, enable the DCN access function for the ports.
Step 3 Cause 3: The DCN VLAN IDs of the NEs at both ends are not the same.
1.
Check whether the DCN VLAN ID of the local NE is the same as the DCN VLAN ID of
the opposite NE. If not, change the DCN VLAN ID of the local NE to the DCN VLAN ID
of the opposite NE.
Step 4 Cause 4: The NEs at both ends of an optical fiber use different protocols.
1.
Check whether the local NE uses the same protocol as the opposite NE. If not, change the
protocol used by the local NE to match the protocol used by the opposite NE.
Step 5 Cause 5: The faulty NE fails to obtain DCN packets due to loss of received signals or extremely
low receive optical power.
1.
Check whether alarms such as ETH_LOS and ETH_LINK_DOWN are reported on the
board that is configured with the inband DCN channel. If yes, see the Alarms and
Performance Events Reference to clear these alarms.
Step 6 Cause 6: The board configured with the inband DCN channel is faulty.
1.
Query whether the HARD_BAD alarm is reported on the board that is configured with the
inband DCN channel. If yes, see the Alarms and Performance Events Reference to clear
the alarm.
Step 7 Cause 7: No response is returned for inband DCN packets, because the SCC board on the faulty
NE is being reset or active/standby protection switching occurs on another board.
1.
Check whether the PROG indicator on the system control and communication (SCC) board
blinks green. If yes, the SCC board is being reset. When the PROG indicator is steady
green, the SCC board has completed resetting and the NE is automatically reconnected to
the DCN.
2.
3.
Step 8 Cause 8: The bandwidth of the inband DCN channel is extremely low or other services preempt
the bandwidth of the DCN channel.
1.
If the number of services that are configured for the port exceeds the specified value, part
of the query result may be lost. To solve the problem, you need to increase the bandwidth
of the inband DCN channel properly. For details, see Setting the VLAN ID and Bandwidth
Used by the Inband DCN.
Step 9 If the fault persists, contact Huawei technical support engineers to handle the fault.
----End
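The ordered checks in the procedure above can be summarized as a decision list. This is a simplified sketch for illustration; the symptom keys are hypothetical names, not U2000 fields:

```python
def troubleshoot(symptoms):
    """Return the recommended action for the first matching inband DCN fault cause.

    symptoms: dict of boolean observations, evaluated in the order
    recommended by the procedure above.
    """
    checks = [
        ("link_down", "Cause 1: reconnect the Ethernet cable or fiber"),
        ("port_disabled", "Cause 2: enable the DCN function of the port"),
        ("vlan_mismatch", "Cause 3: align the DCN VLAN IDs at both ends"),
        ("protocol_mismatch", "Cause 4: use the same protocol at both ends"),
        ("signal_lost", "Cause 5: clear the ETH_LOS/ETH_LINK_DOWN alarms"),
        ("board_faulty", "Cause 6: clear the HARD_BAD alarm or replace the board"),
        ("scc_resetting", "Cause 7: wait until the SCC board completes resetting"),
        ("bandwidth_low", "Cause 8: increase the inband DCN bandwidth"),
    ]
    for key, action in checks:
        if symptoms.get(key):
            return action
    return "Fault persists: contact Huawei technical support"

print(troubleshoot({"vlan_mismatch": True}))
print(troubleshoot({}))
```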
Table 22-2 lists the parameters for configuring inband DCN bandwidth management.

Table 22-2 Parameters for configuring inband DCN bandwidth management

Field              Value
VLAN ID            2 to 4094; default: 4094
Bandwidth (kbps)   64 to 2048; default: 512
Field            Value               Description
Port Name
Enabled Status   Enabled, Disabled   l Enabled: The management information of the U2000 can be received through this port.
                                     l Disabled: The management information of the U2000 cannot be received through this port.
Field            Value                                 Description
IP Address
Port Name
Enabled Status   Enabled, Disabled; default: Enabled   l Enabled: Management information between NEs can be transmitted through this port.
                                                       l Disabled: Management information between NEs cannot be transmitted through this port.
Field           Value
Port Name       Example: 21-N1PETF8-1 (PROT-1)
Protocol Type   IP, HWECC; default: IP

Relevant alarms: DCN_FAIL, DCNSIZE_OVER
A List of Parameters
This topic describes all the parameters for configuring and querying the common boards and
functions on the U2000. Each parameter is described in terms of the description, impact on the
system, values, configuration guidelines, and relationship with other parameters.
A.1 Ethernet Port Associated Parameters (Packet Mode)
Before configuring an Ethernet service, you need to set the parameters associated with Ethernet
ports.
A.2 Ethernet Port Associated Parameters (TDM Mode)
Before configuring an Ethernet service, you need to configure the relevant parameters for
Ethernet ports.
A.3 Ethernet Service Associated Parameters (Packet Mode)
This topic describes the parameters for configuring Ethernet services.
A.4 CES Service Associated Parameters
This topic describes the parameters for configuring CES services.
A.5 Data Service Associated Parameters (TDM Mode)
This topic describes the parameters for configuring Ethernet services and SAN services.
A.6 ETH OAM Associated Parameters (Packet Mode)
This topic describes the parameters that are used for enabling the ETH-OAM function.
A.7 ETH-OAM Associated Parameters (TDM Mode)
This topic describes the parameters for configuring the ETH-OAM function.
A.8 PW Associated Parameters
This topic describes the parameters for configuring PW services.
A.9 PW APS Protection Associated Parameters (Packet Mode)
This topic describes the parameters for configuring PW APS protection.
A.10 HQoS Associated Parameters
This topic describes the parameters that are used for enabling the HQoS function.
A.11 QoS Associated Parameters
This topic describes the parameters for configuring the QoS function.
A.12 ATM Interface Associated Parameters (Packet Mode)
This topic describes the parameters for configuring ATM interfaces.
Values

Valid Values: Non-Loopback, Inloop, Outloop
Default Value: Non-Loopback
Configuration Guidelines
When you use the MAC loopback to locate faults, choose the loopback direction according to
the service direction.
Related Information
[Table: loopback point, meaning, and diagram of the MAC loopback]
Values

Value Range: Non-Loopback, Inloop, Outloop
Default Value: Non-Loopback
Configuration Guidelines
The PHY loopback is mainly used to locate a fault. When setting this parameter, determine the
loopback type according to the service flow direction.
Related Information
[Table: loopback point, meaning, and diagram of the PHY loopback]
Values

Value Range: Enabled, Disabled
Default Value: Enabled
Configuration Guidelines
When a port is used for transmitting services, enable this port first.
Values

Value Range: Null, 802.1Q, QinQ
When the Ethernet port is used for QinQ Link, the port
attribute should be set to Layer 2, and the encapsulation type
should be set to QinQ. In addition, QinQ Type Domain of
the two interconnected Ethernet ports should be set to the
same value.
Configuration Guidelines
Currently, the encapsulation type of the Ethernet port can be set. In addition, the encapsulation
type can be switched only when the port does not carry services.
In the Layer 2 mode, the encapsulation type of the Ethernet port can be Null, 802.1Q, and
QinQ. In the Layer 3 mode, the encapsulation type of the Ethernet port, which is fixed to
802.1Q, cannot be set.
Related Information
For details, see the Port Mode parameter.
Values

Value Range: 10M Half-Duplex, 10M Full-Duplex, Auto-Negotiation, 100M Half-Duplex,
100M Full-Duplex, 1000M Full-Duplex, 10G Full-Duplex LAN, 10G Full-Duplex WAN
Default Value: Auto-Negotiation
Value
Description
10M Half-Duplex
10M Full-Duplex
Auto-Negotiation
100M Half-Duplex
The maximum transmission rate of the port is 100 Mbit/s and the
communication mode is half-duplex.
100M Full-Duplex
The maximum transmission rate of the port is 100 Mbit/s and the
communication mode is full-duplex.
1000M Full-Duplex
The maximum transmission rate of the port is 1000 Mbit/s and the
communication mode is full-duplex.
Value
Description
Configuration Guidelines
The Auto-Negotiation working mode is recommended. If the communication fails when the
working mode of the port is set to Auto-Negotiation, specify the working mode of the port
according to the working mode of the interconnected port.
If the working mode of the port is set to any mode other than Auto-Negotiation, the
interconnected port must work in the same mode. Otherwise, communication fails.
In the case of equipment interconnection, set the communication modes of the interconnected
ports to full-duplex.
Related Information
None.
Values

Value Range: 64-9600
Default Value: 1620
Configuration Guidelines
The maximum data packet length acts as a filter: data packets received on the Ethernet port
whose length exceeds this value are discarded.
When setting this parameter, consider the length of the data packets transmitted from the opposite
end. If the parameter value is less than the length of the data packets transmitted from the opposite
end, this link cannot normally transmit service packets.
The maximum data packet length defines the maximum bytes in a packet that is allowed
by an Ethernet port. All packets whose packet length is larger than the maximum data packet
length will be discarded by the Ethernet port.
The service MTU defines the maximum data packet length allowed by a service. All packets
whose length is larger than the MTU will be discarded.
When both the maximum data packet length and the service MTU are configured, the
smaller value takes effect.
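The filtering rule above can be sketched as follows (an illustrative fragment; the default value 1620 is taken from the Values section above, and the function name is a placeholder):

```python
def accept_packet(length, max_packet_length=1620, service_mtu=None):
    """Apply the maximum-packet-length filter of an Ethernet port.

    When both the maximum data packet length and the service MTU are
    configured, the smaller value takes effect; longer packets are
    discarded.
    """
    limit = max_packet_length
    if service_mtu is not None:
        limit = min(max_packet_length, service_mtu)
    return length <= limit

print(accept_packet(1500))              # within the default limit
print(accept_packet(1700))              # exceeds the 1620 default
print(accept_packet(1600, 1620, 1518))  # the smaller service MTU applies
```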
Values

Value Range: Enabled, Disabled
Default Value: Enabled
Configuration Guidelines
When services are configured, MPLS should not be disabled.
Related Information
MPLS is a standard routing and switching technology platform that supports various higher-
layer protocols and services.
MPLS uses short, fixed-length labels to encapsulate various link-layer packets. Based on the IP
routing and control protocols, MPLS provides connection-oriented switching at the network
layer.
Values

Value Range: Manually, Unspecified
Default Value: Unspecified
Configuration Guidelines
None.
Values

Board Name                                         Valid Values   Default Value
N2EMR0, N2EGR2, N4EFS0, N5EFS0, N2EFS4, N3EFS4,    PE, P          PE
N2EGS2, N3EGS2, N1EFS0A, N1EFS0, N2EFS0, N1EFS4,
N1EMR0
N1EMS4, N1EGS4, N3EGS4, N4EGS4, N1EAS2                            UNI
N1EMS2                                                            PE
Value
Description
PE
Value
Description
UNI
C-Aware
S-Aware
Configuration Guidelines
The port attribute depends on the port position in the network and the service. For this reason,
select a proper port attribute as required. Generally, select the default value.
l
For the MPLS service, select P for the port that transmits or receives packets with MPLS
labels.
For the QinQ service, select C-Aware or S-Aware for the port. Connecting to the port of
the client network, a C-Aware port identifies and processes the packets with C-VLAN
labels. Connecting to the port at the network side, an S-Aware port identifies and processes
the packets with S-VLAN tags. The configuration examples are described as follows:
Add the S-VLAN tag to the service from Port A to Port B, and remove the S-VLAN
tag from the service from Port B to Port A. Then select C-Aware for Port A, and S-Aware for Port B.
Configure a service from Port A to Port B to transparently transmit the C-VLAN tags
at the client side. Then select C-Aware for Ports A and B.
Configure a service from Port A to Port B to transparently transmit the S-VLAN tags
at the network side. Then select S-Aware for Ports A and B.
Configure a service from Port A to Port B to switch the C-VLAN tags at the client side.
Then select C-Aware for Ports A and B.
Configure a service from Port A to Port B to switch the S-VLAN tags at the network
side. Then select S-Aware for Ports A and B.
Related Information
According to the position and role of the equipment in the networking, there are three types of
equipment: CE, PE (U-PE & N-PE), and P. Client Edge (CE) indicates the equipment at the
client side. Provider Edge (PE) indicates the edge equipment at the network side. Provider (P)
indicates the intermediate node at the network side.
Values

Valid Values: Enabled, Disabled
Default Value: Disabled
Configuration Guidelines
l
If the port does not bear a service, set the port attribute to Disabled.
Values

Board Name                              Valid Values                     Default Value   Unit
N1EGS4, N3EGS4                          1518-9216, in step length of 2   1522            Byte
N1EMS4, N4EGS4                          1518-9216, in step length of 1   1522            Byte
N1EFT8, N1EFT8A, R1EFT4                 1518-1535, in step length of 1   1522            Byte
N1EGT2, N1EAS2, N2EMR0, N2EGR2,         1518-9600, in step length of 1   1522            Byte
N4EFS0, N5EFS0, N2EFS4, N3EFS4,
N2EGS2, N3EGS2, N1EFS0A, N1EMS2
Configuration Guidelines
Set this parameter as required. Generally, select the default value, unless otherwise specified.
Values

Valid Values: Disable, Send Only, Receive Only
Default Value: Disable

Value          Description
Disable
Send Only
Receive Only   Indicates that the port can only process the received PAUSE frames.
Configuration Guidelines
This parameter is meaningful only when you configure the EPL service. You can select the value
as required.
Values

Board Name                                Valid Values                                   Default Value
N1EMS4, N1EGS4, N3EGS4, N1EAS2            Enable Symmetric/Dissymmetric Flow Control     Enable Symmetric/Dissymmetric Flow Control
N2EMR0, N2EGR2, N1EFS0, N2EFS0, N4EFS0,   Disabled                                       Disabled
N5EFS0, N2EFS4, N3EFS4, N2EGS2, N3EGS2,
N1EFS0A, N1EMS2
N4EGS4                                    Disabled, Enable Symmetric/Dissymmetric        Enable Symmetric/Dissymmetric
                                          Flow Control                                   Flow Control

Value                                        Description
Disabled                                     Indicates that the port only transmits flow control frames, but does not process the received flow control frames.
Enable Symmetric/Dissymmetric Flow Control
Configuration Guidelines
For the N1EMS4, N1EGS4, N3EGS4, N1EAS2 boards, this parameter is valid only when you
configure the EPL service.
Generally, set this parameter to Enable Symmetric/Dissymmetric Flow Control, unless
otherwise specified.
Values

Valid Values: Non-Loopback, Inloop
Default Value: Non-Loopback
Configuration Guidelines
None.
Values

Valid Values: Non-Loopback, Inloop
Default Value: Non-Loopback
Configuration Guidelines
None.
If QinQ Type Area is set to 0x8100 at the local end, QinQ Type Area must be set to
0x8100 at the opposite end. Otherwise, services are unavailable.
If QinQ Type Area is set to a value other than 0x8100 from 0x600 to 0xFFFF at the local
end, QinQ Type Area must be set to 0x8100 or to the same value at the opposite end.
Otherwise, services are unavailable.
Values

Value Range: 0x600-0xFFFF
Default Value: 0x8100
Configuration Guidelines
Set this parameter according to the supported value of QinQ Type Area of the opposite
equipment.
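The interconnection rule above can be expressed as a small compatibility check. This is a sketch of the stated rule, not equipment code; the function name is a placeholder:

```python
def qinq_types_compatible(local, opposite):
    """Check whether the QinQ Type Area values of two interconnected
    ports allow services, per the interconnection rule above.

    If the local end uses 0x8100, the opposite end must also use 0x8100.
    If the local end uses another value in 0x600-0xFFFF, the opposite
    end must use 0x8100 or the same value.
    """
    if not (0x600 <= local <= 0xFFFF and 0x600 <= opposite <= 0xFFFF):
        raise ValueError("QinQ Type Area must be in 0x600-0xFFFF")
    if local == 0x8100:
        return opposite == 0x8100
    return opposite == 0x8100 or opposite == local

print(qinq_types_compatible(0x8100, 0x8100))  # matching default values
print(qinq_types_compatible(0x88A8, 0x88A8))  # matching non-default values
print(qinq_types_compatible(0x88A8, 0x9100))  # mismatch: services unavailable
```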
For the external physical interface of the board, the transmit direction is connected to the
receive direction by a fiber.
The two external physical ports on the board are cross-connected to each other through
fibers.
Values

Valid Values: Enabled, Disabled
Default Value: Disabled
Configuration Guidelines
To check the self-loop port, select Enabled.
Values

Valid Values: Enabled, Disabled
Default Value: Enabled
Configuration Guidelines
To block a self-loop port, select Enabled. Otherwise, select Disabled.
Values

Unit: Mbps
Configuration Guidelines
Generally, select the value according to the bandwidth.
basis of the traffic proportion at the port. If the bandwidth allocated to the broadcast packets
reaches the specified threshold, the port discards the broadcast data packets that are received.
If less bandwidth is allocated to the broadcast packets, some necessary broadcast services
are affected.
Values

Value Range: 1-10
You can set this parameter according to the percentage of the traffic at the port. The value 10
means that the whole bandwidth is allocated to the port.
Configuration Guidelines
Generally, use the default value.
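The threshold behavior described above can be sketched as follows (illustrative only; the value 1 to 10 is treated as tenths of the port bandwidth, as stated above, and the function name is a placeholder):

```python
def broadcast_allowed(broadcast_rate, port_bandwidth, threshold=10):
    """Decide whether broadcast packets are still accepted on a port.

    threshold: 1-10; the value 10 means the whole port bandwidth may be
    used by broadcast packets. Once the bandwidth used by broadcast
    packets reaches the threshold, received broadcast packets are
    discarded.
    """
    if not 1 <= threshold <= 10:
        raise ValueError("threshold must be in 1-10")
    allowed_bandwidth = port_bandwidth * threshold / 10
    return broadcast_rate < allowed_bandwidth

print(broadcast_allowed(30, 100, threshold=5))  # below the 50% share
print(broadcast_allowed(60, 100, threshold=5))  # share reached: discarded
```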
Values

Value Range: Enabled, Disabled
Default Value: Disabled
Configuration Guidelines
You can set this parameter according to whether to control the traffic of the broadcast packets.
Values

Value Range: Enabled, Disabled
Default Value: Disabled
Configuration Guidelines
Set this parameter according to the actual requirement of the user. Set this parameter to
Enabled if the traffic on a port needs to be monitored.
Values

Value Range: 1-30
Unit: min
Configuration Guidelines
The user can set this parameter according to the actual service requirement.
Values

Value Range: 0 to 65535
Default Value: 34928
Configuration Guidelines
The value of this parameter must be the same as the value of the accessed jumbo frame type.
Otherwise, the Ethernet board does not consider the frame as a jumbo frame.
Values

Valid Values: 1-4095
Configuration Guidelines
Allocate the default VLAN ID according to the networking plan of the service carrier.
If Tag is set to Access for a port, packets without VLAN IDs are added with the default
VLAN IDs when they enter the port. After these packets are transmitted from the port, their
VLAN IDs are peeled off.
If Tag is set to Hybrid for a port, packets without VLAN IDs are added with the default
VLAN IDs when they enter the port. After these packets are transmitted from the port, the
VLAN IDs are peeled off if they are the same as the default VLAN IDs. Otherwise, these
packets are directly transmitted.
If Tag is set to Tag Aware for a port, packets without VLAN IDs are discarded before they
enter the port. Otherwise, these packets are directly transmitted.
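The three Tag settings described above can be modeled as ingress and egress rules. This is a simplified sketch; real boards operate on frames, and the function names are placeholders:

```python
def ingress(tag_mode, vlan_id, default_vlan):
    """Apply the port Tag attribute to an incoming packet.

    Returns the VLAN ID carried inside the equipment, or None if the
    packet is discarded. vlan_id is None for untagged packets.
    """
    if vlan_id is None:
        if tag_mode == "Tag Aware":
            return None  # untagged packets are discarded
        return default_vlan  # Access/Hybrid: add the default VLAN ID
    return vlan_id  # tagged packets are transmitted as received

def egress(tag_mode, vlan_id, default_vlan):
    """Apply the port Tag attribute to an outgoing packet.

    Returns the VLAN ID on the wire, or None if the tag is peeled off.
    """
    if tag_mode == "Access":
        return None  # VLAN IDs are peeled off on the way out
    if tag_mode == "Hybrid" and vlan_id == default_vlan:
        return None  # only the default VLAN ID is peeled off
    return vlan_id

print(ingress("Access", None, 4094))     # untagged packet gets the default ID
print(ingress("Tag Aware", None, 4094))  # untagged packet is discarded
print(egress("Hybrid", 4094, 4094))      # default tag is peeled off
print(egress("Hybrid", 100, 4094))       # other tags are transmitted directly
```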
Values

Valid Values: 0-7
Configuration Guidelines
Set the VLAN priority according to the service requirements and the allocation of the service
carrier.
Values

Valid Values: Enabled, Disabled
Default Value: Enabled

Value      Description
Enabled    The port checks the Tag label. In this case, the Tag attribute of the port is valid.
Disabled   The port does not check the Tag label. In this case, the Tag attribute of the port is invalid.
Configuration Guidelines
l
To transmit the data packet transparently, the user can disable the entry detection function.
To forward the data packet according to the contents of the data packet, the user can enable
the entry detection function.
Values

Value Range: Access, Tag Aware, Hybrid
Default Value: Tag Aware
Configuration Guidelines
The tag attributes are configured for MAC ports and VCTRUNK ports. Hence, the VCTRUNK
ports at both ends of the trunk link can be configured with the tag attributes. In the case of a link,
the services are available only when the parameters of the tag attributes are the same for the
VCTRUNK ports on the source and sink ports. No requirements are proposed for the tag
attributes of MAC ports.
If Tag is set to Access for a port, packets without VLAN IDs are added with the default
VLAN IDs when they enter the port. After these packets are transmitted from the port, their
VLAN IDs are peeled off.
If Tag is set to Hybrid for a port, packets without VLAN IDs are added with the default
VLAN IDs when they enter the port. After these packets are transmitted from the port, the
VLAN IDs are peeled off if they are the same as the default VLAN IDs. Otherwise, these
packets are directly transmitted.
If Tag is set to Tag Aware for a port, packets without VLAN IDs are discarded before they
enter the port. Otherwise, these packets are directly transmitted.
For C-Aware and S-Aware ports, the tag attribute cannot be set.
Related Information
[Table: mapping relationship between the packets handled by the port and the tag identifiers — columns: Packet Type, Handling Method; rows for Tag Aware, Access, and Hybrid ports]
Values
Table A-1 shows the value range of each type of board.
Table A-1 The mapping protocol supported by each type of board
Board Name        Value Range        Default Value
N2EMR0, N2EGR2    GFP, LAPS, HDLC    GFP
Other boards      GFP                GFP
Description
GFP
LAPS
HDLC
Configuration Guidelines
The value of Mapping Protocol for VCTRUNK of the local equipment must be the same as
that of Mapping Protocol for the VCTRUNK of the interconnected equipment.
A.2.22 Scramble
Description
The Scramble parameter specifies whether to scramble the payload area of the encapsulation
protocol and the scramble mode.
Values
Table A-2 shows the value range of each type of board.

Table A-2 Scramble supported by each type of board
Board Name: N2EMR0, N2EGR2, and other Ethernet boards
Value Range: Unscramble, Scramble mode[X43+1]
Default Value: Scramble mode[X43+1]

Description
Unscramble: The payload area is not scrambled.
Scramble mode[X43+1]: The payload area is scrambled with the self-synchronous scrambler whose polynomial is x^43+1.
Configuration Guidelines
The value of Scramble for VCTRUNK must be the same as that of Scramble for the VCTRUNK
of the interconnected equipment.
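The Scramble mode[X43+1] option refers to a self-synchronous scrambler of the form used by GFP (ITU-T G.7041). A bit-level sketch, illustrative only and not the board firmware:

```python
# Bit-level sketch of a self-synchronous x^43+1 scrambler:
# each output bit is the input bit XORed with the scrambled bit 43 positions earlier.

def scramble(bits, state=None):
    """Scramble a list of 0/1 bits: out[i] = in[i] XOR out[i-43]."""
    state = list(state) if state else [0] * 43  # last 43 scrambled bits
    out = []
    for b in bits:
        s = b ^ state[0]
        out.append(s)
        state = state[1:] + [s]
    return out

def descramble(bits, state=None):
    """Inverse operation: out[i] = in[i] XOR in[i-43] (self-synchronizing)."""
    state = list(state) if state else [0] * 43  # last 43 received bits
    out = []
    for b in bits:
        out.append(b ^ state[0])
        state = state[1:] + [b]
    return out
```

Because the descrambler keys off the received bit stream itself, it recovers automatically after bit errors, which is why both ends only need to agree on whether scrambling is enabled.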
Values
Value Range: Yes, No
Default Value: Yes

Description
Yes: The CRC value is inverted.
No: The CRC value is not inverted.
Configuration Guidelines
The value of Set Inverse Value for CRC for VCTRUNK of the local equipment must be the
same as that of Set Inverse Value for CRC for the VCTRUNK of the interconnected equipment.
Values
Table A-3 shows the value range of each type of board.

Table A-3 The length of the CRC field supported by each type of board

Board Name: N4EFS0, N5EFS0, N2EFS4, N3EFS4, N2EGS2, N3EGS2, N1EFS0A, N1EMS2, N1EAS2
Mapping Protocol: GFP — Value Range: FCS32, No; Default Value: FCS32

Board Name: N1EMS4, N1EGS4, N3EGS4, N4EGS4, N1EGT2, N2EGT2
Mapping Protocol: GFP — Value Range: FCS32, No; Default Value: FCS32
Mapping Protocol: LAPS — Value Range: FCS32; Default Value: FCS32
Mapping Protocol: HDLC — Value Range: FCS32; Default Value: FCS32

Board Name: N2EMR0, N2EGR2, N1EGT2, N1EFT8, N1EFT8A, R1EFT4
Mapping Protocol: GFP — Value Range: FCS32, No; Default Value: FCS32
Mapping Protocol: LAPS — Value Range: FCS32; Default Value: FCS32

Description
No: The protocol frame does not contain the CRC field. Only the GFP protocol supports this option.
FCS32: The protocol frame contains a 32-bit frame check sequence (CRC) field.
Configuration Guidelines
If Mapping Protocol is set to HDLC or LAPS, the value of Check Field Length must be
consistent for the interconnected VCTRUNKs at the two ends.
Values
Table A-4 shows the value range of each type of board.

Table A-4 FCS calculated bit sequence supported by each type of board

Board Name: N1EMS4, N1EGS4, N3EGS4, N4EGS4, N1EGT2, N2EGT2
Mapping Protocol: GFP — Value Range: Big endian; LAPS, HDLC — Value Range: Little endian
Default Value: Mapping Protocol: GFP; FCS Calculated Bit Sequence: Big endian

Board Name: N2EMR0, N2EGR2
Mapping Protocol: GFP — Value Range: Big endian; LAPS — Value Range: Little endian, Big endian
Default Value: Mapping Protocol: GFP; FCS Calculated Bit Sequence: Big endian

Board Name: N4EFS0, N5EFS0, N2EFS4, N3EFS4, N2EGS2, N3EGS2, N1EFS0A, N1EMS2, N1EAS2
Mapping Protocol: GFP — Value Range: Big endian
Default Value: Mapping Protocol: GFP; FCS Calculated Bit Sequence: Big endian

Board Name: N1EFT8, N1EFT8A, R1EFT4
Mapping Protocol: GFP — Value Range: Big endian; LAPS, HDLC — Value Range: Little endian
Default Value: Mapping Protocol: GFP; FCS Calculated Bit Sequence: Big endian

Description
Big endian: The FCS is calculated starting from the most significant bit.
Little endian: The FCS is calculated starting from the least significant bit.
Configuration Guidelines
The value of FCS Calculated Bit Sequence for the VCTRUNK of the local equipment must be the same as that of FCS Calculated Bit Sequence for the VCTRUNK of the interconnected equipment.
Values
Table A-5 shows the value range of each type of board.

Table A-5 Extension header option supported by each type of board
Board Name: N4EFS0, N2EFS4, N2EGS2, N1EFT8, N1EFT8A, N1EGT2, N2EGT2, R1EFT4, N1EAS2, N2EMR0, N2EGR2 — Value Range: Yes, No; Default Value: No
Board Name: N1EMS4, N1EGS4, N3EGS4, N4EGS4, N5EFS0, N3EFS4, N3EGS2, N1EFS0A, N1EMS2 — Value Range: No; Default Value: No

Description
Yes: The GFP frame carries an extension header.
No: The GFP frame does not carry an extension header.
Configuration Guidelines
Select the value according to whether the GFP protocol is required to support the extension
header.
Values
Value Range: Static
Default Value: Static

Configuration Guidelines
None.
Values
Value Range: PW, Port, QinQ Link

Description
PW: The bearer is the PW.
Port: The bearer is the physical port, and the slot ID and port number need to be specified.
QinQ Link: The bearer is the QinQ link, and the QinQ link ID needs to be specified.
Configuration Guidelines
l The bearer of the E-Line service V-NNI can be the PW, port, or QinQ link.
l The bearer of the E-LAN service V-NNI can be the PW, port, or QinQ link.
Values
Value Range: 1-4294967295
Configuration Guidelines
None.
A.3.4 BPDU
Description
The BPDU parameter sets whether the service needs to transparently transmit bridge protocol data unit (BPDU) packets. A BPDU carries the information exchanged between bridges, which is used to compute the spanning tree of the network.
Values
Value Range: Transparently Transmitted, Not Transparently Transmitted
Default Value: Transparently Transmitted
Configuration Guidelines
If the BPDU packets need be transparently transmitted to the opposite end of the network, set
the BPDU to Transparently Transmitted during the service creation.
If the BPDU packets need be processed on the local NE as service packets for computing the
network spanning tree, set the BPDU to Not Transparently Transmitted during the service
creation.
Values
Value Range: 46-9000
Default Value: 1500
Unit: Byte
Configuration Guidelines
MTU(bytes) should not be less than the maximum length of the user packet payload; otherwise, packets whose length exceeds the service MTU are discarded.
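The MTU rule above amounts to a simple length filter; a minimal sketch (the function name is illustrative):

```python
# The MTU rule in miniature: payloads longer than the service MTU are dropped.

def forward(payloads, mtu_bytes):
    """Return only the payloads that fit within the configured MTU."""
    return [p for p in payloads if len(p) <= mtu_bytes]
```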
Values

VLAN ID — Value Range: 1 to 4094
MAC Address — Value Range: 00-00-00-00-00-01 to FF-FF-FF-FF-FF-FE
Egress Interface
Configuration Guidelines
The static MAC address must be a unicast address; it cannot be a multicast or broadcast address.
Related Information
When the E-LAN service works in IVL mode, packets are forwarded based on the VLAN and
MAC address.
When the E-LAN service works in SVL mode, packets are forwarded based on the MAC address.
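The difference between the two modes can be sketched by the key used in the MAC address table: IVL keys entries by (VLAN, MAC), SVL by MAC alone. This is an illustrative model with hypothetical function names, not the NE implementation.

```python
# Illustrative model of IVL vs. SVL forwarding described above.

def learn(table, mode, vlan, mac, port):
    """Record which port a MAC address was seen on."""
    key = (vlan, mac) if mode == "IVL" else mac
    table[key] = port

def lookup(table, mode, vlan, mac):
    """Return the egress port, or None (meaning the packet is flooded)."""
    key = (vlan, mac) if mode == "IVL" else mac
    return table.get(key)
```

In IVL mode the same MAC address learnt in two VLANs yields two independent entries; in SVL mode the second learning event overwrites the first.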
Values

VLAN ID — Value Range: 1-4094
MAC Address — Value Range: 00-00-00-00-00-01 to FF-FF-FF-FF-FF-FE
Egress Interface
Configuration Guidelines
Configure the parameter according to the service configuration information.
Related Information
None.
Values
Value Range: 1-640
Configuration Guidelines
Set the aging time of MAC addresses according to the user requirements. The minimum time is
one minute.
Related Information
None.
Values
Value Range: BE, AF1, AF2, AF3, AF4, EF, CS6, CS7, NONE
Default Value: BE
Description
BE: The user packets on the V-UNI side are set to BE.
AF1: The user packets on the V-UNI side are set to AF1.
AF2: The user packets on the V-UNI side are set to AF2.
AF3: The user packets on the V-UNI side are set to AF3.
AF4: The user packets on the V-UNI side are set to AF4.
EF: The user packets on the V-UNI side are set to EF.
CS6: The user packets on the V-UNI side are set to CS6.
CS7: The user packets on the V-UNI side are set to CS7.
NONE: The priority of the user packets on the V-UNI side is set according to the mapping relationship in the DS domain.
Configuration Guidelines
CS7: Indicates the highest forwarding priority, for delivering the control packets (very important
protocol packets) in the network.
CS6: Indicates the priority that is lower than CS7, for delivering the control packets (important
protocol packets) in the network.
EF: Indicates the expedited forwarding priority that is lower than CS6, for the low delay services
(for example, voice services).
AF4: Indicates the assured forwarding priority 4, whose forwarding priority is lower than EF.
AF3: Indicates the assured forwarding priority 3, whose forwarding priority is lower than AF4.
AF2: Indicates the assured forwarding priority 2, whose forwarding priority is lower than AF3.
AF1: Indicates the assured forwarding priority 1, whose forwarding priority is lower than AF2.
BE: Indicates the best effort forwarding priority that is the lowest forwarding priority, for the
services without QoS in the network.
Values
Value Range: Green, Yellow, Red
Default Value: Green
Configuration Guidelines
The user packets of a higher priority should be marked green, those of a medium priority yellow, and those of a lower priority red.
Values
Split Horizon Group ID
Configuration Guidelines
None.
Related Information
None.
Values
Value Range
Default Value
Configuration Guidelines
One service can be configured with only one split horizon group.
Related Information
None.
Values
Value Range: V-UNI, V-NNI
Configuration Guidelines
The logical interface type of the source interface of the VLAN switching table for the E-AGGR
service can be set to V-UNI or V-NNI. The interconnected logical interfaces, however, should
be of the same type.
Related Information
None.
Values
Value Range: V-UNI, V-NNI
Configuration Guidelines
The logical interface type of the sink interface of the VLAN switching table for the E-AGGR
service can be set to V-UNI or V-NNI. The interconnected logical interfaces, however, should
be of the same type.
Related Information
None.
Values
Value Range: 125-5000
Default Value: 1000
Unit: us
Configuration Guidelines
The value of the packet loading time ranges from 125 to 5000 in steps of 125.
The default bandwidth for a PW that transmits a CES service is 3 Mbit/s. If Packet Loading Time (us) is set to 125 or 250, and if RTP Header is set to Enable Huawei RTP or Enable a Standard RTP, the bandwidth for a PW is 4 Mbit/s.
Before changing the parameter value of a PW, you need to ensure that the PW is not bound with
the service. After the change, you need to bind the PW with the service, and then check whether
the parameter value is changed.
Related Information
If the packet loading time is the default value of 1000 us/packet, because the rate of E1 signals is 8000 frames/s, the number of frames in each packet is: 8000 frames/s x 1000 us/packet = 8 frames.
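The arithmetic above can be written as a one-line helper (illustrative; the function name is an assumption):

```python
# E1 produces 8000 frames per second, i.e. one frame every 125 us,
# so the number of frames per packet scales with the packet loading time.

E1_FRAMES_PER_SECOND = 8000

def frames_per_packet(loading_time_us):
    """Number of E1 frames carried in one packet for a given loading time (us)."""
    return E1_FRAMES_PER_SECOND * loading_time_us // 1_000_000
```

For the default 1000 us loading time this gives 8 frames per packet; the minimum setting of 125 us gives exactly one frame per packet.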
Set Version (V) to 2. Set Padding (P), Header Extension (X), CSRC Count (CC), and Marker
(M) to 0.
Payload type (PT):
l
PT values are allocated to each direction of a PW, and the PT values are in a dynamic value
range. The receive and transmit directions of a PW can share a PT value. Different PWs
can share a PT value.
The PE at the upstream end of a PW puts the allocated PT value in the PT field of the RTP header. The PE at the downstream end of the PW can detect exceptional packets according to the received PT value.
The sequence number must be the same as the serial number in the CESoPSN control word.
The timestamp is used for carrying the time information in the network. Two timestamp
generation modes are as follows:
l Absolute mode: The TDM circuit can restore clock information, and the upstream PE sets the timestamp according to the clock. The timestamp is closely related to the serial number. All equipment supporting CESoPSN must support the absolute mode.
Huawei Proprietary and Confidential
Copyright Huawei Technologies Co., Ltd.
The synchronization source identifiers are used for detecting errored connections.
Values
Value Range: Disable, Enable Huawei RTP, Enable a Standard RTP
Default Value: Disable
Configuration Guidelines
The default bandwidth for a PW that transmits a CES service is 3 Mbit/s. If Packet Loading Time (us) is set to 125 or 250, and if RTP Header is set to Enable Huawei RTP or Enable a Standard RTP, the bandwidth for a PW is 4 Mbit/s.
If the jitter compensation buffering time is set, the subsequent operations are performed after
the system has received packets for more than a half of the jitter compensation buffering time.
Values
Value Range: 125-64000
Default Value: 8000
Unit: us
Configuration Guidelines
The value of the jitter compensation buffering time ranges from 125 to 64000 in steps of 125.
Related Information
The principles of delays and jitters on the network side are as follows.
Total delays of CES services include:
l
Assume that the interval between the arrival times of packet 1 and packet 2 is T. Encapsulation of packet 1 is completed at t(0); after t1, packet 1 is processed and forwarded to the opposite end. Encapsulation of packet 2 begins at t(0) and is completed after tp; then, after t2, packet 2 is processed and forwarded to the opposite end. T = (tp + t2) - t1 = tp + (t2 - t1); that is, the difference between the transmission times of packets 1 and 2 equals the packet loading time plus the difference between the processing times of packets 2 and 1.
The variation in the interval between the arrival times of packets is called jitter on the network side.
The more serious the delay jitter on the network side, the more system resources are occupied to smooth the jitter.
The working principle of the jitter buffer is as follows.
The jitter buffer is in the egress direction (from the network side to the TDM side) of the link. Packets are transmitted on the network side and received periodically on the TDM side. The packets received from the network side arrive in bursts, whereas the E1 services received on the TDM side are constant data streams.
If the jitter compensation buffering time is set, the subsequent operations are performed after the system has received packets for half of the jitter compensation buffering time. The attributes of a jitter buffer include depth and delay. When configuring the advanced attributes of a PW on the NMS, the jitter compensation buffering time is the depth of the jitter buffer. If the depth of a jitter buffer is 8000 us, the subsequent operations are performed after the jitter buffer has received packets for 4000 us. Therefore, even if the packets from the network side have jitters, the jitter buffer stabilizes the downstream data streams.
The jitter compensation buffering time varies with network conditions. If the delays or jitters of a network are large, the jitter compensation buffering time needs to be set to a greater value, which prolongs the delay of the link. If the network-side performance is stable with a fixed delay, the size of the jitter buffer can be reduced toward the packet loading time to decrease the delay of the link. In either case, the jitter compensation buffering time must remain greater than the maximum interval between the arrival times of consecutive packets.
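The buffering rule described above (playout starts after half the configured depth has been filled, the depth must exceed the worst observed jitter, and the value is configured in 125 us steps) can be sketched as follows. The function names and the sizing heuristic are illustrative assumptions, not a Huawei-specified procedure.

```python
# Sketch of jitter-buffer sizing for the rule described above.

STEP_US = 125  # the depth is configured in steps of 125 us

def choose_depth(max_jitter_us, loading_time_us, limit_us=64000):
    """Smallest valid depth that still absorbs the observed jitter."""
    needed = max(max_jitter_us, loading_time_us)
    depth = -(-needed // STEP_US) * STEP_US  # round up to the next 125-us step
    return min(depth, limit_us)

def playout_start_us(depth_us):
    """Packets are buffered for half the depth before playout begins."""
    return depth_us // 2
```

With the default depth of 8000 us, playout toward the TDM side begins once 4000 us of packets have been buffered, matching the example in the text.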
Values
Value Range: Master Mode, Slave Mode
Default Value: Master Mode
Configuration Guidelines
None.
Values

Board Name: N1EMS4, N1EGS4, N3EGS4, N4EGS4
Default Value: Add S-VLAN
l For bidirectional services, the options are as follows:
Add S-VLAN
Transparently Transmit C-VLAN
Transparently Transmit S-VLAN
Translate C-VLAN
Translate S-VLAN
l For unidirectional services, the options are as follows:
Add S-VLAN
Add S-VLAN and C-VLAN
Strip C-VLAN
Strip S-VLAN
Strip S-VLAN and C-VLAN
Transparently Transmit C-VLAN
Transparently Transmit S-VLAN
Translate C-VLAN
Translate S-VLAN

Board Name: N1EAS2
Default Value: Add S-VLAN
l For bidirectional services, the options are as follows:
Add S-VLAN
Translate S-VLAN
Transparently Transmit S-VLAN
Transparently Transmit C-VLAN
l For unidirectional services, the options are as follows:
Add S-VLAN
Strip S-VLAN
Translate S-VLAN
Transparently Transmit S-VLAN
Transparently Transmit C-VLAN

Description
Add S-VLAN and C-VLAN: Indicates that the C-VLAN and S-VLAN labels are added to the packets processed in the service.
Configuration Guidelines
Select a proper item according to network planning and service model.
Values
Valid Values: EPL, EVPL(QinQ), EVPL(MPLS), Transit(MPLS) (the supported subset depends on the board)
Default Value: EPL
Configuration Guidelines
Select a service type as required.
Values
Value Range: MartinioE, Stack VLAN
Default Value: MartinioE
Configuration Guidelines
The user can choose an encapsulation format according to the requirements of the service. Different encapsulation formats support different types of data packets. When the encapsulation format is inconsistent with the type of the received data packet, the data packet is discarded. When configuring services, the user needs to make sure that the encapsulation format of the port is consistent with the type of the data packet transmitted by the interconnected equipment.
Related Information
Figure A-2 Encapsulation format of MartinioE
DA (6 bytes) | SA (6 bytes) | 0x8847 | Tunnel (4 bytes) | VC (4 bytes) | Ethernet Data
The encapsulated Ethernet data carries its own DA, SA, VLAN (0x8100), S-VLAN, C-VLAN, data, and FCS fields.
Values
Valid Values: Empty, 1-4095
Default Value: Empty
Configuration Guidelines
Select the value according to the network. Generally, select C-VLAN and S-VLAN allocated
by the carrier.
Values
Value Range: 1 to 4095
Configuration Guidelines
l The value range is relevant to the encapsulation format of the P port (per-NE configuration). In the case of MartinioE, the value ranges from 16 to 1023. In the case of Stack VLAN, the value ranges from 1 to 4095.
l The VLAN IDs at both ends of a link must be the same. In the case of different Ethernet services, you can set the VLAN ID to different values.
If no more packets are received from this MAC address, the MAC address is deleted from the MAC address table.
If more packets are received from this MAC address, the aging time of the MAC address is reset.
The aging time includes two parts: the aging time value and the aging time unit.
An extremely long aging time causes stale entries to remain in the MAC address table. In this case, the board makes incorrect decisions to filter or forward packets. Consequently, the forwarding efficiency is affected.
An extremely short aging time results in frequent refreshing of the MAC address table. In this case, the destination addresses fail to be found in the MAC address table for a large quantity of received data packets, and the board has to broadcast these data packets to all the ports. Consequently, the forwarding efficiency is greatly affected.
Values
Aging time value — Value Range: 1-120
Aging time unit — Default Value: Min
Configuration Guidelines
The parameter value may affect the forwarding efficiency of the EPLAN or EVPLAN service.
Generally, use the default value.
For example, to set the aging time to 20 days, set the aging time to 20, and then set the aging
time unit to Day.
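The aging behavior described above can be sketched as a table of last-seen timestamps: a received packet resets the timer of its source MAC address, and entries older than the aging time are removed. This is an illustrative model with hypothetical names, not the board implementation.

```python
# Illustrative model of MAC address aging.
import time

class MacTable:
    def __init__(self, aging_time_s):
        self.aging_time_s = aging_time_s
        self.entries = {}  # mac -> (port, last_seen)

    def learn(self, mac, port, now=None):
        """Learning (or re-learning) a MAC address resets its aging timer."""
        self.entries[mac] = (port, now if now is not None else time.time())

    def expire(self, now=None):
        """Drop entries whose aging time has elapsed."""
        now = now if now is not None else time.time()
        self.entries = {m: (p, t) for m, (p, t) in self.entries.items()
                        if now - t < self.aging_time_s}
```

An aging time of 20 days, as in the example above, corresponds to `MacTable(20 * 24 * 3600)`.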
Communication is available between a port configured with Spoke and a port configured
with Hub.
Values
Value Range: Hub, Spoke
Default Value: Hub
Configuration Guidelines
Set this parameter according to the isolation domain range of the user.
For example, to make communication available between the headquarters and the branches but unavailable between the branches, set Hub for the headquarters and Spoke for the branches. In this case, communication is available between the headquarters and any one of the branches but unavailable between any two of the branches.
For the N4EFS0 and N2EFS4, this parameter indicates the number of MAC addresses that
are dynamically learnt from a VB logical port or VLAN filter. The MAC addresses in the
static MAC address table and forbidden MAC addresses are not included.
For the N1EMS4, N1EGS4 and N3EGS4, this parameter indicates the number of MAC
addresses that are dynamically learnt from a VLAN filter. The MAC addresses in the static
MAC address table and forbidden MAC addresses are not included.
For the N2EMR0 and N2EGR2, this parameter indicates the number of MAC addresses
that are dynamically learnt from a VB logical port. The MAC addresses in the static MAC
address table and forbidden MAC addresses are not included.
For the N1EAS2, this parameter indicates the number of dynamic MAC addresses that are
queried in the VLAN filter.
Values
For example, for the N1EMS4, N1EGS4 and N3EGS4, if the port dynamically learns five MAC addresses, the actual number of learnt MAC addresses is five.
Configuration Guidelines
None.
Values

Board Name: N4EFS0, N2EFS4, N1EFS0A, N1EMS2, N3EFS4, N3EGS2, N5EFS0
Value Range: No Limit, 0-16384
Default Value: No Limit
Description: No Limit indicates that the MAC address capacity is 16384.

Value Range: No Limit, 0-129535
Default Value: No Limit
Description: No Limit indicates that the MAC address capacity is 129535. 0-129535 indicates the allowable size of the MAC address table, which specifies MAC Address Table Capacity of the VLAN filter.

Board Name: N2EMR0, N2EGR2
Value Range: No Limit, 0-65535
Default Value: No Limit
Description: No Limit indicates that the MAC address capacity is 65535. 0-65535 indicates the allowable size of the MAC address table, which specifies MAC Address Table Capacity of the VB logical port.
Configuration Guidelines
The user can specify this parameter for the VB logical port or VLAN filter as required.
Values
Value Range: IVL, SVL
Configuration Guidelines
The user can set the parameter according to the networking requirements.
Values
Value Range: First Page, Previous Page, Next Page
Default Value: First Page

Description
Previous Page: Indicates querying the page that precedes the page where the last query ends.
Next Page: Indicates querying the page that follows the page where the last query ends.
Configuration Guidelines
Normally the first page is the one where the first batch query starts, and the one where the second
query starts is the next page.
A.5.12 Tunnel
Description
Tunnel indicates the tunnel ID of this node. A tunnel ID is specified for every new EVPL (MPLS) or Transit (MPLS) service.
l When the port encapsulation format is MartinioE, the Tunnel flag is the outer label of the MPLS encapsulation format and identifies a service together with the VC flag.
l When the encapsulation format of the port is Stack VLAN, the Tunnel flag is the outer flag of the VMAN encapsulation format.
Values
Value Range: 16-1023 (MartinioE) or 16-1048575 (MartinioE), depending on the board
Default Value: -
Configuration Guidelines
The user can set this parameter within the value range as required. The Tunnel value of different
services at the same node, however, cannot be the same.
A.5.13 VC
Description
VC indicates the VC number of the node. A VC number is specified for every new EVPL (MPLS) service.
When the port encapsulation format is MartinioE, the VC flag is the inner label of the MPLS encapsulation format and identifies a service together with the Tunnel flag.
Values
Value Range: 16-1023
Configuration Guidelines
You can set this parameter to any value within the value range according to the requirements of
the user. The VC values of different services at the same node, however, cannot be the same.
Values
Value Range: Add S-VLAN based on Port, Add S-VLAN based on Port and C-VLAN, Mount Port, Mount Port based on Port and S-VLAN

Description
Mount Port based on Port and S-VLAN: Indicates the mounting based on port+S-VLAN. In this case, the S-VLAN needs to be specified.
Configuration Guidelines
Set this parameter according to the actual requirement of the user.
Values
Value Range: 802.1d, 802.1q, 802.1ad
Default Value: 802.1q

Description
802.1d: The 802.1d bridge forwards data based on the MAC address.
802.1q: The 802.1q bridge forwards data based on VLAN+MAC.
802.1ad
The 802.1ad bridge supports the pure bridge and virtual bridge.
That is, the 802.1ad bridge forwards data based on VLAN+MAC
or the MAC address. The application of the 802.1ad bridge can
realize the service switch between the carrier and its users and
can isolate services of different users. In addition, the 802.1ad
bridge can identify the voice service and data service, which
improves the quality of the voice service.
Configuration Guidelines
Set this parameter according to the actual service condition of the user.
Values
Valid Values: FICON service, ESCON service, DVB-ASI service, SD-SDI service, HD-SDI service, FC service
Default Value: Invalid service

NOTE
For the FICON, ESCON, DVB-ASI, SD-SDI, and HD-SDI services, each service maps a rate. For the FC service, select FC100 or FC200.

Description
FICON service
ESCON service
DVB-ASI service
SD-SDI service: Rate: 270Mbit/s
HD-SDI service: Rate: 1.485Gbit/s
FC service
Configuration Guidelines
Select a proper service type as required.
Values
Valid Value
Default Value
Configuration Guidelines
l If Enabled Flow Control of FC Port is set to Disabled, the service rate in the link is affected when the FC100 or FC200 service is transmitted over a long distance. For example, if the CREDIT value is 20 for the storage equipment, and if the storage equipment is more than 50 km away from the SAN board, the service rate is greatly decreased. When the distance exceeds 20 km (for the FC200 service) or 40 km (for the FC100 service), the buffer of the storage equipment is used up.
l If Enabled Flow Control of FC Port is set to Enabled, the service rate in the link is not affected when the FC100 or FC200 service is transmitted over a long distance.
l If the settings of the equipment are inconsistent at the two ends, the flow control function is disabled, and the service is also unavailable.
Values
Valid Values: Enabled, Disabled
Default Value: Enabled
Configuration Guidelines
The Enabled Flow Control of FC Port parameter must be consistent with each other for the
interconnected equipment at the two ends.
l If the parameter value is less than 5, the service rate in the link is low.
l If the parameter value is not less than 5, the service rate in the link is close to the line speed.
The FC100 service is transmitted in the link and the transmission distance is 10 km at the client side.
l If the parameter value is less than 10, the service rate in the link is low.
l If the parameter value is not less than 10, the service rate in the link is close to the line speed.
NOTE
If the credit value is large, the system is not affected. Generally, set it to the maximum value.
Values
Valid Values: 0-20
Default Value: 20

Configuration Guidelines
1. The credit values at the client side should be consistent for the interconnected equipment at the two ends. In this case, the uplink rate is consistent with the downlink rate.
2. This parameter is decided by the transmission distance at the client side. For the FC100 service, the value 0.5 maps to a transmission distance of 1 km. For the FC200 service, the value 1 maps to a transmission distance of 1 km.
3.
l If the parameter value is less than 50, the service rate in the link is low.
l If the parameter value is not less than 50, the service rate in the link is close to the line speed.
The FC200 service is transmitted in the link and the transmission distance is 100 km at the client side.
l If the parameter value is less than 100, the service rate in the link is low.
l If the parameter value is not less than 100, the service rate in the link is close to the line speed.
NOTE
If the credit value is large, the system is not affected. Generally, set it to the maximum value.
Values
Valid Values: 0-1500
Default Value: 1500

Configuration Guidelines
1. The credit values at the WAN side need to be consistent for the interconnected equipment at the two ends. In this case, the uplink rate is consistent with the downlink rate.
2. This parameter is decided by the transmission distance at the WAN side. For the FC100 service, the value 0.5 maps to a transmission distance of 1 km. For the FC200 service, the value 1 maps to a transmission distance of 1 km.
3.
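The distance-to-credit mapping in guideline 2 can be expressed as a small helper. The function name, the rounding, and the clamping to the parameter's maximum are illustrative assumptions, not part of the product documentation.

```python
# Credit estimate from the stated mapping: 0.5 credit per km for FC100,
# 1 credit per km for FC200.
import math

CREDITS_PER_KM = {"FC100": 0.5, "FC200": 1.0}

def required_credit(service, distance_km, max_credit=1500):
    """Minimum credit value for a given service type and transmission distance."""
    credit = math.ceil(distance_km * CREDITS_PER_KM[service])
    return min(credit, max_credit)
```

Per this mapping, an FC100 link of 10 km needs a credit of 5, and an FC200 link of 100 km needs a credit of 100; larger credit values do not harm the system, so the maximum is generally used.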
Values
Default Value: 1s
Configuration Guidelines
It is recommended that you use three period values, that is, 3.33 ms for protection switching,
100 ms for performance check, and 1s for connectivity check. The configuration should comply
with user requirements. If the fast check is required, set to 3.33 ms. Hence, the fault can be
detected quickly. The bandwidth used, however, descends with the period value.
Values
Value Range: 0-7

The following table lists descriptions of each value. You can also define the maintenance scope for an MD level as required.

Value 0-2: Operator
Value 3-4: Service Provider
Value 5-7: Customer
Configuration Guidelines
"0" indicates the lowest MD level and "7" the highest MD level. The parameter level defines
the maintenance scope of the OAM operations.
Related Information
Before setting this parameter, obtain the information about the MD.
Values
Value Range: Active, Inactive
Default Value: Active
Configuration Guidelines
If the check is needed, select Active. Otherwise, select Inactive.
Values
Value Range
Default Value
Configuration Guidelines
None.
Values
Value Range: E-Line, E-LAN, E-AGGR
Configuration Guidelines
None.
Related Information
None.
Values
Value Range: Active, Inactive
Default Value: Active

Description
Inactive: Indicates that the NE where the MEP resides does not send any CC packet, and the NE where the RMEP resides does not check for an alarm.
Configuration Guidelines
The activation status is "Active" after the MEP is created on the NE and the service connectivity
can be checked. The activation status can also be set to "Inactive".
Related Information
Before setting this parameter, obtain the information about the MD, MA, MEP and RMEP.
Values
Value Range: 1-255
Configuration Guidelines
The time taken increases with the number of transmitted packets.
Related Information
None.
Values
Value Range: 64-1400
Default Value: 64
Configuration Guidelines
The default value is 64. Packets of different lengths may have different connectivity test results.
Related Information
None.
Values
Value Range: 0-7
Default Value: 7
Configuration Guidelines
Value 0 indicates the lowest priority, and value 7 indicates the highest priority. By default, value
7, the highest priority, is used.
Related Information
None.
Values
Configuration Guidelines
Indicates the MAC address of the port where the destination MP is located.
Related Information
None.
Values
Value Range
Default Value
Configuration Guidelines
Indicates the MAC address of the port where the response MP is located.
Related Information
None.
the packets pass through each maintenance intermediate point (MIP). For example, if the packets pass through one MIP and reach the response MP, the returned hop count is 1. The maximum value of the Hop Count parameter is 64. If the packets pass through 64 MPs and fail to reach the response MP, the OAM packets are discarded. In this case, the value "/" is returned.
Values
Value Range: 1-64, /
Configuration Guidelines
The Hop Count parameter cannot be set. When the loopback test (LT) is complete, a value is
returned. When the OAM packets pass through 64 MPs and fail to reach the response MP, the
"/" value is returned.
Related Information
None.
Values
Value Range: Character string
Configuration Guidelines
None.
Values
Valid Values: 00-00-0001 to FF-FF-FF00
Default Value: 00-00-0001
Configuration Guidelines
The maintenance point ID must be unique in the entire network. Moreover, the U2000 can check
whether a maintenance point ID is duplicated.
Values
Valid Values
Default Value
MEP, MIP
MEP
Configuration Guidelines
None.
Values
Valid Values
Default Value
Activate, Inactivate
Inactivate
Configuration Guidelines
To start the connectivity check, activate the CC function at a maintenance endpoint.
To stop the connectivity check, deactivate the CC function.
Values
Value Range
Default Value
Deactivate
Description
Deactivate
Source activate
Sink activate
Configuration Guidelines
Set this parameter according to the actual requirement of the user.
Values
Valid Values
Default Value
Succeeded, Failed
Configuration Guidelines
None.
Values
Valid Values
Default Value
Unknown
Configuration Guidelines
None.
As shown in Figure A-4, MEP1 and MEP2 are the maintenance endpoints. MIP1, MIP2, MIP3
and MIP4 are the maintenance intermediate points. In this case, the number of hops from MEP1
to MEP2 is 5, and that from MEP1 to MIP3 is 3.
Figure A-4 An example of number of hops
Values
If the value of Hop Count is 2, there are two hops.
Configuration Guidelines
None.
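The counting in the example above can be illustrated with a minimal sketch (Python; the node names come from Figure A-4, while representing the path as an ordered list is an assumption made purely for illustration):

```python
# Ordered maintenance points on the path from Figure A-4
# (MEP1 at one end, MEP2 at the other; the ordering is assumed for illustration).
PATH = ["MEP1", "MIP1", "MIP2", "MIP3", "MIP4", "MEP2"]

def hop_count(source, target, path=PATH):
    """Number of hops from source to target: one hop per maintenance point traversed."""
    return path.index(target) - path.index(source)

# MEP1 -> MEP2 crosses four MIPs plus the far endpoint: 5 hops.
# MEP1 -> MIP3 crosses MIP1 and MIP2 plus MIP3 itself: 3 hops.
```

This reproduces the two figures quoted above: 5 hops to MEP2 and 3 hops to MIP3.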
Values
Value Range
Default Value
Burst Mode
Value
Description
Burst Mode
Continue Mode
Configuration Guidelines
To check whether the link communication is available, set this parameter to Burst Mode to
transmit several ping packets at one time. To locate the fault on the link, set this parameter to
Continue Mode to continuously transmit ping packets.
Values
Valid Values
Default Value
Unit
64
Byte
Configuration Guidelines
Set this parameter according to the expected frame length.
Values
Valid Values
Default Value
Unit
Second
Configuration Guidelines
Set this parameter to a lower value if the requirement is high for the response time.
Set this parameter to a higher value if the requirement is low for the response time.
Values
Valid Values
Default Value
Unit
Second
Configuration Guidelines
None.
A.7.12 Delay
Description
The Delay parameter specifies the hold-off period from the time when packets are transmitted
to the time when packets are received in each OAM Ping or performance test.
Values
Valid Values
Default Value
Unit
0-Timeout time
ms
Configuration Guidelines
None.
Values
In each Ping test, set the number of Ping attempts to 4. The actual test results are as follows:
For the first Ping attempt, the delay is 100 ms.
For the second Ping attempt, the delay is 200 ms.
For the third Ping attempt, the delay is 300 ms.
The fourth Ping attempt fails because of timeout.
In this case, the average delay is as follows:
Average delay = (100 + 200 + 300) / 3 = 200 ms
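The averaging above (with timed-out attempts excluded) can be sketched as follows; the helper function is hypothetical, not a U2000 API:

```python
def average_delay(delays_ms):
    """Average delay over successful Ping attempts.
    Timed-out attempts are represented as None and excluded from the average."""
    ok = [d for d in delays_ms if d is not None]
    return sum(ok) / len(ok) if ok else None

# Four attempts from the example above; the fourth timed out.
results = [100, 200, 300, None]
```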
Configuration Guidelines
None.
Values
Valid Values
Default Value
Unit
0-Timeout time
ms
Configuration Guidelines
None.
Values
Valid Values
Default Value
Unit
0-Timeout time
ms
Configuration Guidelines
None.
Values
Valid Values
Default Value
Unit
Attempt
Configuration Guidelines
Set this parameter to a proper value according to the test accuracy and the system resource used
in the test.
Values
Value Range
Default Value
SDH Direction, /
Description
SDH Direction, /
Values
Valid Values
Default Value
Unit
1000
ms
Configuration Guidelines
Set this parameter according to the actual port rate and the monitoring period.
Make sure that the value of Error Frame Monitor Threshold is not greater than the maximum
number of frames received at the port within the time specified in Error Frame Monitor
Window (ms).
Values
Valid Values
Default Value
Unit
Frame
Configuration Guidelines
If higher link performance is required, set the threshold to a lower value. Otherwise, set the
threshold to a higher value.
Make sure that the value of Error Frame Monitor Threshold is not greater than the maximum
number of frames received at the port within the time specified in Error Frame Monitor
Window (ms).
Values
Valid Values
Default Value
Unit
Maxpps
Frame
Configuration Guidelines
Set this parameter according to the actual data frame transmission rate. If the data transmission
rate is high, set this parameter to a higher value. Otherwise, set this parameter to a lower value.
Related Information
Maxpps: indicates the maximum number of frames per second.
According to the rule Maxpps/10 < Error Frame Period Window < Maxpps*60, you can derive
the value range of the Error Frame Period Window (Frame) parameter for a given port rate.
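A small sketch of deriving that range from the stated rule; the example Maxpps value (148810, roughly a 100 Mbit/s port at minimum-size 64-byte frames) is an assumption for illustration:

```python
def period_window_range(maxpps):
    """Bounds for Error Frame Period Window (Frame), per the rule
    Maxpps/10 < window < Maxpps*60 (both bounds exclusive as stated)."""
    return maxpps // 10, maxpps * 60

# Hypothetical example: Maxpps = 148810.
low, high = period_window_range(148810)
```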
Values
Valid Values
Default Value
Unit
Frame
Configuration Guidelines
None.
Values
Valid Values
Default Value
Unit
60
Second
Configuration Guidelines
Set this parameter according to the monitoring time period.
Make sure that the value of Error Frame Second Window (s) is not less than that of Error
Frame Second Threshold (s).
Values
Valid Values
Default Value
Unit
Second
Configuration Guidelines
None.
Values
Valid Values
Default Value
Enabled, Disabled
Disabled
Configuration Guidelines
None.
Values
Valid Values
Default Value
Active, Passive
Active
Value
Description
Active
Passive
Configuration Guidelines
None.
Values
Valid Values
Default Value
Enabled, Disabled
Enabled
Configuration Guidelines
None.
Values
Valid Values
Default Value
Enabled, Disabled
Disabled
Configuration Guidelines
If the hardware has the capability of performing unidirectional operations and supports
unidirectional software operations, generally, set Unidirectional Operation to Enabled.
Values
Valid Values
Default Value
Non-Loopback
Description
Non-Loopback
Configuration Guidelines
None.
Values
Value Range
Default Value
Description
No use
Used First
Configuration Guidelines
None.
Related Information
Figure A-5 CW structure of CESoPSN
Compared with CESoPSN, in SAToP the M bit is replaced by the RSV bit, and the RSV bit is
set to 0.
Values
Value Range
Default Value
None, CW
CW
Description
None
CW
Configuration Guidelines
None.
Values
Value Range
Default Value
None, Ping
Ping
Description
None
Ping
Configuration Guidelines
None.
Values
Value Range
Default Value
1-4095, Non-specified
Non-specified
Configuration Guidelines
When the Request VLAN parameter is set to Non-specified, tagged packets are transparently
transmitted, and untagged packets are assigned VLAN tag 0.
Values
Value Range
Default Value
Up, Down
Up
Value
Description
Up
Down
Configuration Guidelines
Dynamic supports Down and Static supports Up.
Values
Value Range
Default Value
Enabled, Disabled
Disabled
Description
Enabled
Disabled
Configuration Guidelines
When checking whether a fault occurs on a service PW, set this parameter to Enabled.
Related Information
None.
Values
Value Range
Default Value
1-31
10
Configuration Guidelines
If Max.Concatenated Cell Count is set to 1, the concatenation function is disabled.
Values
Value Range
Default Value
1+1, 1:1
1+1
Description
1+1
1:1
Configuration Guidelines
When you create a protection group in 1:1 protection mode, the protection channel cannot be
configured with extra services. For this reason, the actual applications of 1+1 and 1:1 protection
groups are the same, except for the configuration.
Values
Value Range
Default Value
No Protection
Value
Description
PW APS
Value
Description
Unprotected
Configuration Guidelines
Set this parameter according to the service network. For example, if the working and protection
PWs that transmit the existing services on an NE have the same source and sink, set this
parameter to Slave Protection Pair.
Related Information
None.
Values
Value Range
Default Value
Value
Description
Working in working
tunnel
Working in protection
tunnel
Services are transmitted on the working tunnel all the time, and the
protection tunnel is in standby state.
Forcibly switching
status and working in
the protection tunnel
Invalid protection
tunnel (because the
OAM generates an SF
alarm)
Value
Description
Invalid protection
tunnel (because the
OAM generates an SD
alarm)
Value
Description
Configuration Guidelines
Protection switching is triggered when any of the following conditions is met: SF, an external
switching command, or WTR expiry.
Values
Value Range
Default Value
Disabled, Enabled
Disabled
Description
Disabled
Enabled
Configuration Guidelines
If the APS protocol of the local NE is Enabled before the APS protocol of the opposite NE is
Enabled, an exception may occur when the opposite NE receives the services. It is recommended
that the APS protocol be enabled after the MPLS APS protection group is configured for both
ends.
When you configure the MPLS APS protection, the default protocol status is Disabled.
A.9.5 PW Type
Description
The PW Type parameter is used for identifying the type of a service transmitted over a PW. A
PW can transmit multiple types of services.
Values
Value Range
Default Value
Ethernet
Value
Description
Ethernet
Ethernet Tag
Mode
SATop
CESoPSN
Configuration Guidelines
When creating a PW, select the PW type based on the type of the service bound with the PW.
Values
Set the traffic classification rule in the following formats:
[Match Type : Match Value : Wildcard]
And Logical Relation Between Matched Rules (the matched packets must meet all the traffic
classification rules):
[Match Type : Match Value : Wildcard] & [Match Type : Match Value : Wildcard] &...& [Match
Type : Match Value : Wildcard]
Or Logical Relation Between Matched Rules (the matched packets must meet at least one of the
traffic classification rules):
[Match Type : Match Value : Wildcard] | [Match Type : Match Value : Wildcard] |...| [Match Type :
Match Value : Wildcard]
Configuration Guidelines
The traffic classification rules are applicable only to the port policy or V-UNI ingress policy.
The character string of the match rule (match type, match value, and wildcard included) can
contain a maximum of 128 bytes.
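A hypothetical helper (not part of the U2000) that builds rule strings in the documented format, useful for checking the 128-byte limit:

```python
def make_rule(match_type, match_value, wildcard=None):
    """Build one traffic classification rule string in the documented
    [Match Type : Match Value : Wildcard] format (wildcard optional)."""
    parts = [match_type, match_value] + ([wildcard] if wildcard is not None else [])
    return "[" + " : ".join(parts) + "]"

def combine(rules, logic="&"):
    """Join rules with '&' (And logical relation) or '|' (Or logical relation)."""
    return (" %s " % logic).join(rules)

rule_a = make_rule("Source IP", "192.168.1.0", "0.0.0.255")
```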
Related Information
Currently, the equipment cannot identify IPv6 packets.
For details on how to set this parameter, see the description of the Match Type, Match
Value, Wildcard, and Logical Relation Between Matched Rules parameters.
Values
Value Range
Default Value
Description
Source IP
Destination IP
Source MAC
Address
Value
Description
Destination MAC
Address
The traffic classification rule that the destination MAC address matches
is in the format of [Destination MAC Address : Destination MAC
Address Value : Wildcard].
For example, [Destination MAC Address : 00-e0-fc-54-ab-59 :
00-00-00-00-00-00].
The protocol type is matched.
Protocol Type
The traffic classification rule that the protocol type matches is in the
format of [Protocol Type : Protocol Type Value].
For example, [Protocol Type : icmp].
Source Port
The source port is matched (available when the protocol type is TCP and
UDP).
The traffic classification rule that the source port matches is in the format
of [Source Port : Source Port Value : Wildcard].
For example, [Source Port : 23 : 23].
Destination Port
The destination port is matched (available when the protocol type is TCP
and UDP).
The traffic classification rule that the destination port matches is in the
format of [Destination Port : Destination Port Value : Wildcard].
For example, [Destination Port : 80 : 80].
ICMP Packet
Type
The ICMP packet type is matched (available when the protocol type is
ICMP).
The traffic classification type that the ICMP packet type matches is in
the format of [ICMP Packet Type : ICMP Packet Type Value].
For example, [ICMP Packet Type : echo].
DSCP Value
IP-Precedence
Value
CVlan ID
Value
Description
CVlan priority
The traffic classification rule that the CVlan priority matches is in the
format of [CVlan priority : CVlan priority Value : Wildcard].
For example, [CVlan priority : 4 : 6].
SVlan ID
SVlan priority
DEI
Configuration Guidelines
To process different service packets differently (for example, to apply ACLs, different
scheduling priorities, or different discard policies), perform traffic classification on the packets
according to the feature values that vary among packets. A feature value that can distinguish
the packets as required is adopted to classify the packets.
For example, user A and user B access the network through the same port. The network should
provide different QoS for the two users. Hence, the packets of user A and user B should be
distinguished at the port.
The analysis shows the following:
In the case of the service packets of user A, the prefix of the source IP address is 192.168.1.0
and the subnet mask is 255.255.255.0.
In the case of the service packets of user B, the prefix of the source IP address is 192.168.2.0
and the subnet mask is 255.255.255.0.
The packets of user A can be distinguished from the packets of user B according to the source
IP address. Hence, two traffic classification rules should be set at the port.
[Source IP : 192.168.1.0 : 0.0.0.255]
[Source IP : 192.168.2.0 : 0.0.0.255]
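A sketch of how such an IP-plus-wildcard rule can be evaluated; standard wildcard-mask semantics (a wildcard bit of 1 means "don't care") are assumed, and the helper is hypothetical rather than a U2000 function:

```python
import ipaddress

def ip_matches(packet_ip, rule_ip, wildcard):
    """True if packet_ip matches rule_ip under the given wildcard mask.
    Wildcard bit 1 = don't care; bit 0 = the bit must match exactly."""
    mask = ~int(ipaddress.IPv4Address(wildcard)) & 0xFFFFFFFF
    return (int(ipaddress.IPv4Address(packet_ip)) & mask) == \
           (int(ipaddress.IPv4Address(rule_ip)) & mask)

# User A's rule matches any host in 192.168.1.0/24; user B's any host in 192.168.2.0/24.
is_user_a = ip_matches("192.168.1.77", "192.168.1.0", "0.0.0.255")
```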
Related Information
For the setting of this parameter, see the description of the Traffic Classification Rule, Match
Value, Wildcard, and Logical Relation Between Matched Rules parameters.
Values
Value Range
Default Value
Description
Source IP
Destination IP
Indicates the source MAC address value when the source MAC
address matches the traffic classification rule.
For example, 00-0f-ef-54-aa-00
Destination MAC
Address
Value
Description
Protocol Type
Indicates the protocol type value when the protocol type matches the
traffic classification rule.
Protocol type range: tcp, udp, icmp, igmp
Source Port
Indicates the source port value when the source port (available when
the protocol type is TCP and UDP) matches the traffic classification
rule.
The source port value ranges from 0 to 65535.
Destination Port
Indicates the destination port value when the destination port (available
when the protocol type is TCP and UDP) matches the traffic classification
rule.
The destination port value ranges from 0 to 65535.
ICMP Packet Type
Indicates the ICMP packet type value when the ICMP packet type
(available when the protocol type is ICMP) matches the traffic
classification rule.
Value range of ICMP packet type:
echo
echo-reply
fragmentneed-DFset
host-redirect
host-tos-redirect
host-unreachable
information-reply
information-request
net-redirect
net-tos-redirect
net-unreachable
parameter-problem
port-unreachable
protocol-unreachable
reassembly-timeout
source-quench
source-route-failed
timestamp-reply
timestamp-request
ttl-exceeded
DSCP Value
Indicates the value of DSCP when the DSCP value matches the
traffic classification value.
The DSCP value ranges from 0 to 63.
Value
Description
IP-Precedence Value
CVlan ID
CVlan priority
SVlan ID
SVlan priority
DEI
Indicates the value of DEI when the DEI matches the traffic
classification value.
The DEI value can be 0 or 1.
Configuration Guidelines
The match value of each match type should be within the valid range.
Related Information
For details on how to set this parameter, see the description of the Traffic Classification
Rule, Match Type, Wildcard, and Logical Relation Between Matched Rules parameters.
When the wildcard is set to all "0"s, it indicates that the packets should strictly match the match
value.
NOTE
The digits of the wildcard and the match type values refer to the digits obtained after these
values are converted to the binary format.
If the Match Type is Source IP, the match value is 192.168.1.100. In this case, if the
wildcard is 0.0.0.255, it indicates that all the packets whose source IP address starts with
192.168.1 comply with the flow classification rule.
If the Match Type is CVLAN Priority, the match value is 7 (the corresponding binary
value is 111). In this case, if the wildcard is 6 (the corresponding binary value is 110), it
indicates that the packets whose CVLAN priorities are 7, 5, 3, and 1 (the corresponding
binary values are 111, 101, 011, and 001) comply with the flow classification rule.
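The CVLAN-priority example above can be reproduced with a small sketch; the helper is hypothetical, and the standard "wildcard bit 1 = don't care" semantics described above are assumed:

```python
def matching_priorities(match_value, wildcard, bits=3):
    """All priorities whose 'care' bits (wildcard bit = 0) equal those of the
    match value; a wildcard bit of 1 means the bit is ignored."""
    care = ~wildcard & ((1 << bits) - 1)  # mask of bits that must match
    return [p for p in range(1 << bits) if (p & care) == (match_value & care)]

# Match value 7 (binary 111), wildcard 6 (binary 110): only bit 0 must match.
matched = matching_priorities(7, 6)
```

This yields exactly the priorities 1, 3, 5, and 7 listed in the example.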
Values
Value Range
Default Value
Description
Source IP
Destination IP
Destination MAC
Address
Protocol Type
Indicates no wildcard.
Match Type
Description
Source Port
Destination Port
Indicates no wildcard.
DSCP Value
IP-Precedence Value
CVLAN ID
CVLAN Priority
SVLAN ID
SVLAN Priority
DEI
Indicates no wildcard.
Configuration Guidelines
The wildcard value of each match type should be within its own valid range. When the wildcard
is set to all "0"s, it indicates that the packets should strictly match the match value.
Related Information
For details on how to set this parameter, see the description of the Traffic Classification
Rule, Match Type, Match Value, and Logical Relation Between Matched Rules parameters.
Values
Value Range
Default Value
Bidirectional, Unidirectional
Description
Bidirectional
Unidirectional
Configuration Guidelines
If the PW needs to process packets in both directions, that is, packets entering and exiting the
network, set the value to Bidirectional; otherwise, set it to Unidirectional. Generally, set the
value to Unidirectional for a broadcast PW and to Bidirectional for a unicast PW.
Values
Value Range
Default Value
Ingress, Egress
Description
Ingress
Egress
Configuration Guidelines
None.
Values
The Duplicated Policy Name parameter should contain letters or numbers or both, with a
maximum length of 64 characters. The characters \ and / are not allowed.
Configuration Guidelines
To change the current policy, you can directly duplicate an existing policy that satisfies the
requirement to the current policy. In this case, no extra setting is required for the current policy.
Values
Policy Type
Value Range
Port policy
0-100
0-2000
0-2000
PW policy
0-2000
QinQ policy
0-2000
0-256
0-7
0-127
Configuration Guidelines
The IDs of policies of different types can be repeated, whereas the IDs of policies of the same
type must be unique.
Values
The policy name should contain letters or numbers or both, with a maximum length of 64
characters. The characters \ and / are not supported.
Configuration Guidelines
Different policies can have the same policy name.
Values
Value Range
Default Value
1-4294967295
Configuration Guidelines
The QinQ Link ID parameter identifies a unique QinQ link. Hence, do not set the same ID for
different QinQ links.
Values
Slot number - board name - port number (PORT - number).
Configuration Guidelines
None.
Values
Value Range
Default Value
1-4094
Configuration Guidelines
When creating the QinQ link, you should define the port and S-VLAN ID. The port and S-VLAN
ID cannot be occupied by other services. Moreover, the S-VLAN ID must be set within the valid
range.
Values
Value Range
Default Value
Enabled, Disabled
Disabled
Value
Description
Disabled
Value
Description
Enabled
Configuration Guidelines
This parameter is set according to service requirements.
For example, packets on multiple V-UNIs that use a certain V-UNI Ingress Policy match Flow
Classification of this policy. In addition, the service packets of these flows have the same service
features, such as the same destination IP address. In this case, if these service packets need to
share the CIR bandwidth set in Flow Classification of this policy, set Traffic Classification
Bandwidth Sharing to Enabled.
Related Information
For the setting of this parameter, see Traffic Classification Rule.
Values
Value Range
Default Value
Color-Blind, Chromatic-Sensitive
Color-Blind
Value
Description
Color-Blind
Chromatic-Sensitive
Configuration Guidelines
After the upstream DS domain marks the service packets that access the local DS domain, if
the coloring result of the upstream DS domain needs to be considered on the ingress node of
the local DS domain, Chromatic-Sensitive applies to the Traffic Classification that matches
the service packets. Otherwise, Color-Blind applies.
Values
Value Range
Default Value
And, Or
And
Description
And
The packet matches the flow only when the packet matches
each traffic classification rule.
Or
The packet matches the flow when the packet matches one of
the traffic classification rules.
Configuration Guidelines
None.
Values
Value Range
Default Value
Value
Description
Pass
Discard
Remark
Configuration Guidelines
When network congestion occurs or the color of a packet needs to be adjusted, packets of
different colors can be configured with different processing modes.
Values
Value Range
Default Value
Unit
1-100
25
Configuration Guidelines
The larger the value of the AF1 scheduling weight, the higher the scheduling priority of the AF1
queue and the lower the packet loss ratio when the bandwidth is insufficient. Conversely, the
smaller the value, the lower the scheduling priority of the AF1 queue and the higher the packet
loss ratio when the bandwidth is insufficient.
Values
Value Range
Default Value
Unit
1-100
25
Configuration Guidelines
The larger the value of the AF2 scheduling weight, the higher the scheduling priority of the AF2
queue and the lower the packet loss ratio when the bandwidth is insufficient. Conversely, the
smaller the value, the lower the scheduling priority of the AF2 queue and the higher the packet
loss ratio when the bandwidth is insufficient.
Values
Value Range
Default Value
Unit
1-100
25
Configuration Guidelines
The larger the value of the AF3 scheduling weight, the higher the scheduling priority of the AF3
queue and the lower the packet loss ratio when the bandwidth is insufficient. Conversely, the
smaller the value, the lower the scheduling priority of the AF3 queue and the higher the packet
loss ratio when the bandwidth is insufficient.
Values
Value Range
Default Value
Unit
1-100
25
Configuration Guidelines
The larger the value of the AF4 scheduling weight, the higher the scheduling priority of the AF4
queue and the lower the packet loss ratio when the bandwidth is insufficient. Conversely, the
smaller the value, the lower the scheduling priority of the AF4 queue and the higher the packet
loss ratio when the bandwidth is insufficient.
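Under weighted scheduling, each AF queue's share of the contested bandwidth is roughly proportional to its weight; a hypothetical sketch (the equipment's exact scheduler is not specified here):

```python
def af_bandwidth_share(weights):
    """Approximate fraction of contested bandwidth each AF queue receives
    under weight-proportional scheduling."""
    total = sum(weights.values())
    return {queue: w / total for queue, w in weights.items()}

# With the default weight of 25 for all four queues, each gets an equal share.
shares = af_bandwidth_share({"AF1": 25, "AF2": 25, "AF3": 25, "AF4": 25})
```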
Values
Value Range
Default Value
Unit
0-4095
l Red: 1024
256 bytes
l Yellow: 1536
l Green: 2048
Configuration Guidelines
When the value of Discard Lower Threshold is smaller, the length of a queue is shorter. The
value of Discard Lower Threshold must not be more than the value of Discard Upper
Threshold.
Values
Value Range
Default Value
Unit
0-4095
l Red: 3072
256 bytes
l Yellow: 3584
l Green: 4095
Configuration Guidelines
The value of Discard Upper Threshold cannot be less than the value of Discard Lower
Threshold.
Values
Value Range
Default Value
Unit
1-100
100
Configuration Guidelines
The maximum discard probability can be 100%, which indicates that all the packets in a queue
are discarded when the length of the queue is more than the specified value of Discard Upper
Threshold.
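The threshold behavior above can be sketched as a WRED-style discard curve; the linear interpolation between the two thresholds is an assumption for illustration (the equipment's exact curve is not specified here), and the sample thresholds are the red-packet defaults quoted above (units of 256 bytes):

```python
def wred_drop_probability(queue_len, lower, upper, max_p):
    """Discard probability (%) as a function of queue length:
    0 at or below the lower threshold, max_p approaching the upper threshold,
    100 (drop all) above the upper threshold; linear in between (assumed)."""
    if queue_len <= lower:
        return 0.0
    if queue_len > upper:
        return 100.0
    return max_p * (queue_len - lower) / (upper - lower)

# Hypothetical red-packet thresholds from the defaults above.
p = wred_drop_probability(2048, lower=1024, upper=3072, max_p=100)
```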
Values
Value Range
Default Value
Configuration Guidelines
When the equipment in different DS domains is interconnected, the mapping relationship in the
egress direction needs to be configured so that the CoS information can be mapped into the
priority bit of the packet.
Values
Value Range
Default Value
Value
Description
cvlan
svlan
ip-dscp
mpls-exp
Configuration Guidelines
The value setting depends on the packet type.
Values
Value Range
Default Value
Unit
1024-10000000,
Unlimited
4294967295
(FFFFFFFFFF is invalid)
kbit/s
Configuration Guidelines
The greater the CIR, the higher the traffic rate and the more packets are forwarded.
It is recommended that the rate of the packets not exceed the CIR.
If the policy is applied to function points, such as PW, port, VUNI, and QINQ, you need to
ensure that the sum of the CIRs in the policies applied to the function point is not more than the
CIR of the function point.
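A minimal single-rate token-bucket sketch of how CIR and CBS interact (hypothetical; the equipment's actual policer is not specified here). Tokens refill at the CIR; a packet conforms only if the bucket holds enough tokens for it:

```python
def police(packets, cir_kbps, cbs_bytes):
    """Classify each packet as 'forward' (conforming) or 'exceed'.
    packets: list of (arrival_time_s, size_bytes) in arrival order."""
    tokens, last_t, verdicts = cbs_bytes, 0.0, []
    rate = cir_kbps * 1000 / 8  # token refill rate in bytes per second
    for t, size in packets:
        tokens = min(cbs_bytes, tokens + (t - last_t) * rate)  # refill, cap at CBS
        last_t = t
        if size <= tokens:
            tokens -= size
            verdicts.append("forward")
        else:
            verdicts.append("exceed")
    return verdicts
```

With a small CBS, a burst arriving faster than the CIR exhausts the bucket and the tail of the burst is marked nonconforming, which matches the guideline that the CBS should cover the expected burst size.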
Values
Value Range
Default Value
Unit
64-10000000
4294967295
(FFFFFFFFFF is invalid)
byte
Configuration Guidelines
If the CBS is small, the buffer easily overflows and some packets are discarded when the
bandwidth is insufficient.
The greater the CBS, the more packets can be buffered when the bandwidth is insufficient and
the lower the packet loss ratio. However, a greater CBS also causes more serious delay jitter
when packets are forwarded.
For the OptiX OSN equipment, the CBS is reserved and cannot be set.
Values
Value Range
Default Value
Unit
64-10000000
4294967295
(FFFFFFFFFF is invalid)
(kbit/s)
Configuration Guidelines
It is recommended that the PIR be not less than the CIR.
For the OptiX OSN equipment, the CBS is reserved and cannot be set.
Values
Value Range
Default Value
Unit
64-10000000
4294967295
(FFFFFFFFFF is invalid)
byte
Configuration Guidelines
Although the packets in the PBS buffer may still fail to be forwarded, the PBS buffer decreases
the packet loss ratio.
The greater the PBS, the lower the packet loss ratio, but the more serious the delay jitter when
packets are forwarded.
For the OptiX OSN equipment, the CBS is reserved and cannot be set.
A.10.30 EXP
Description
The EXP parameter specifies the field in the MPLS packets for identifying the priority of these
MPLS packets.
E-LSP is used to set the EXP. The value 7 is the highest priority.
Values
Value Range
Default Value
0, 1, 2, 3, 4, 5, 6, 7, None
None
Value
Description
The values 0-7 correspond to the eight levels of the CoS policy:
0 corresponds to BE, 1 to AF1, 2 to AF2, 3 to AF3, 4 to AF4, 5 to EF, 6 to CS6, and 7 to CS7.
Configuration Guidelines
The higher the value of the EXP parameter, the higher the priority of the packets.
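The EXP-to-CoS correspondence listed above, as a small lookup sketch (the mapping values come from the table; the helper itself is hypothetical):

```python
# EXP-to-CoS mapping as listed above.
EXP_TO_COS = {0: "BE", 1: "AF1", 2: "AF2", 3: "AF3",
              4: "AF4", 5: "EF", 6: "CS6", 7: "CS7"}

def cos_of(exp):
    """CoS level carried by an MPLS EXP value; None means the field is unset."""
    return EXP_TO_COS.get(exp)
```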
Values
Value Range
Default Value
Pipe, Uniform
Uniform
Description
Pipe
Indicates that the CoS policy of the packets does not need to be restored
when the tunnel labels are peeled off.
Uniform
Configuration Guidelines
None.
Values
Board Name
Value Range
Default Value
N1EMS4, N1EGS4,
N3EGS4
l Port Flow
Port Flow
l Port+VLAN Flow
l Port+SVLAN Flow
N4EFS0, N5EFS0,
N2EFS4, N3EFS4,
N2EGS2, N3EGS2,
N2EMR0, N2EGR2
l Port Flow
N1EFS0A,
N1EMS2, N4EGS4
l Port Flow
Port Flow
l Port+VLAN Flow
l Port+VLAN+Priority
Port Flow
l Port+VLAN Flow
l Port+VLAN+Priority
l Port+SVLAN Flow
N1EAS2
l Port Flow
Port Flow
l Port+VLAN Flow
l Port+SVLAN Flow
l Port+CVLAN+SVLAN Flow
Value
Description
Port Flow
Value
Description
Port+VLAN Flow
Port+VLAN+Priority
All the packets that enter from the specified port, and
whose VLAN Tag VID and priority are consistent
with the specified VID and priority, are regarded as
one flow.
Port+SVLAN Flow
All the packets that enter from the specified port, and
whose SVLAN VID is consistent with the specified
VID, are regarded as one flow.
Port+CVLAN+SVLAN Flow
All the packets that enter from the specified port, and
whose SVLAN VID and CVLAN VID are consistent
with the specified VID, are regarded as one flow.
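The flow definitions above can be summarized as the set of fields that identifies one flow for each Flow Type; a hypothetical sketch (field names are illustrative, and the equipment's internal keying may differ):

```python
def flow_key(flow_type, port, cvlan=None, svlan=None, priority=None):
    """Tuple of fields that identifies one flow for each documented Flow Type."""
    keys = {
        "Port Flow": (port,),
        "Port+VLAN Flow": (port, cvlan),
        "Port+VLAN+Priority": (port, cvlan, priority),
        "Port+SVLAN Flow": (port, svlan),
        "Port+CVLAN+SVLAN Flow": (port, cvlan, svlan),
    }
    return keys[flow_type]

# Two packets with the same key belong to the same flow under that Flow Type.
key = flow_key("Port+SVLAN Flow", port=1, svlan=100)
```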
Configuration Guidelines
Based on the required QoS and service type, set a proper value for Flow Type.
Values
Value Range
Default Value
Created CAR ID
Configuration Guidelines
The created flow can be bound only with a created CAR policy. For this reason, select the value
from the created CAR IDs.
Values
Value Range
Default Value
Created CoS ID
Configuration Guidelines
The created flow can be bound only with a created CoS policy. For this reason, select the value
from the created CoS IDs.
Values
The value ranges for each type of board are as follows:
Board Name
Value Range
Default Value
1-4095
1-2048
Configuration Guidelines
You can set this parameter to any value within the value range as required. Each CAR has a
unique CAR ID.
Values
Valid Value
Default Value
Enabled, Disabled
Disabled
Value
Description
Disabled
Enabled
Configuration Guidelines
You can set this parameter to Enabled or Disabled, depending on whether the CAR is required
to limit the traffic volume.
Values
Board Name
Value Range
Default
Value
Unit
N1EAS2
kbit/s
N1EMS4, N1EGS4,
N3EGS4, N4EGS4
kbit/s
N4EFS0, N5EFS0,
N2EFS4, N3EFS4,
N2EGS2, N3EGS2,
N2EMR0, N2EGR2,
N1EFS0A,
N1EMS2
kbit/s
Configuration Guidelines
Based on the actual QoS requirement, set a proper value for Committed Information Rate.
Generally, the value of Committed Information Rate is not less than the expected average rate
for transmitting the flow.
Values
Board Name
Value Range
Default Value
Unit
N1EAS2
0-2048
Kbyte
N4EFS0, N5EFS0,
N2EFS4, N3EFS4,
N2EGS2, N3EGS2,
N1EFS0A, N1EMS2,
N2EMR0, N2EGR2
0-128
Kbyte
N1EMS4, N1EGS4,
N3EGS4, N4EGS4
0-32
Kbyte
Configuration Guidelines
Based on the actual QoS requirements, set a proper value for Committed Burst Size.
Generally, the value of Committed Burst Size is not less than the possible size of expected
burst data flow to be transmitted.
If the traffic volume is greater than the value of Peak Information Rate, the excessive traffic
is discarded.
Values
Board Name
Value Range
Default Value
Unit
N1EAS2
An integer from 0 to 10485760, in
steps of 64
kbit/s
N1EMS4,
N1EGS4, N3EGS4,
N4EGS4
kbit/s
N4EFS0, N5EFS0,
N2EFS4, N3EFS4,
N2EGS2, N3EGS2,
N2EMR0,
N2EGR2,
N1EFS0A,
N1EMS2
kbit/s
Configuration Guidelines
Based on the actual QoS requirement, you can set a proper value for Peak Information Rate.
The value of Peak Information Rate should not be less than the guaranteed bandwidth.
Generally, the value of Peak Information Rate should not be greater than the expected maximum
rate of the transmitted flow.
Values
Board Name
Value Range
Default
Value
Unit
N1EAS2
0-2048
kbyte
0-32
kbyte
kbyte
Configuration Guidelines
Based on the actual QoS requirements, you can set a proper value for Maximum Burst Size.
Generally, the value of Maximum Burst Size should not be greater than the size of the burst
data flow to be transmitted.
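Taken together, the four parameters above (CIR, CBS, PIR, and MBS) behave like a two-rate token-bucket policer. The class below is an illustrative model only, not the boards' actual implementation; all names are hypothetical.

```python
# Illustrative two-rate token-bucket policer: traffic within CIR/CBS is
# committed, traffic between CIR and PIR (within MBS) is excess, and traffic
# above PIR is dropped. This is a sketch, not the equipment's algorithm.
class TwoRatePolicer:
    def __init__(self, cir_kbps, cbs_kbyte, pir_kbps, mbs_kbyte):
        self.cir = cir_kbps * 1000 / 8      # committed rate, bytes/s
        self.pir = pir_kbps * 1000 / 8      # peak rate, bytes/s
        self.cbs = cbs_kbyte * 1024         # committed bucket depth, bytes
        self.mbs = mbs_kbyte * 1024         # peak bucket depth, bytes
        self.tc, self.tp = self.cbs, self.mbs   # buckets start full
        self.last = 0.0

    def police(self, now, size):
        # refill both buckets for the elapsed time, capped at their depths
        elapsed, self.last = now - self.last, now
        self.tc = min(self.cbs, self.tc + self.cir * elapsed)
        self.tp = min(self.mbs, self.tp + self.pir * elapsed)
        if size > self.tp:
            return "drop"           # exceeds Peak Information Rate
        self.tp -= size
        if size > self.tc:
            return "excess"         # between CIR and PIR
        self.tc -= size
        return "committed"          # within Committed Information Rate
```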
Values
The value ranges for each type of board is as follows:
Board Name
Value Range
Default Value
1-65535
1-8192
Configuration Guidelines
You can set this parameter to any value in the value range as required. A CoS maps a CoS ID.
Values
Board Name
Value Range
Default Value
N1EAS2, N2EFS0,
N2EGS2
Simple
N1EMS4, N1EGS4,
N3EGS4, N4EGS4,
N5EFS0, N3EFS4,
N3EGS2, N2EMR0,
N2EGR2, N1EFS0A,
N1EMS2, N2EFS4,
N4EFS0
Simple
Value
Description
Simple
VLAN priority
IPTOS
DSCP
Configuration Guidelines
Based on the requirements of QoS, set a proper value for CoS Type.
Values
For the CoS of the simple type, follow Table A-6 to set a simple CoS Priority.
Table A-6 CoS priority of the simple type

Data Board                        CoS        Value Range of  Value Range of
                                  Parameter  CoS Parameter   CoS Priority
N4EFS0, N2EFS4, N2EGS2,           Invalid    Invalid         0-7
N1EFS0A, N1EMS2, N1EAS2,
N3EGS2, N3EGS4, N4EGS4
N5EFS0, N1EMS4, N1EGS4            Invalid    Invalid         0-3
N3EFS4                            Invalid    Invalid         0-1
N2EMR0, N2EGR2                    Invalid    Invalid         A/B/C
For the CoS of the VLAN Priority type, follow Table A-7 to set the mapping from VLAN
Priority to CoS Priority.
Table A-7 CoS priority of the VLAN priority type

Data Board                CoS Parameter          Value Range of  Value Range of  Default Value of
                                                 CoS Parameter   CoS Priority    CoS Priority
N4EFS0, N2EFS4, N2EGS2,   User priority in VLAN  0-7             0-7             Value range of the
N1EFS0A, N1EMS2                                                                  mapping CoS priority: -
N1EAS2                    User priority in VLAN  0-7             0-7
N3EGS2                    User priority in VLAN  0-7             0-7
N5EFS0, N1EMS4, N1EGS4,   User priority in VLAN  0-7             0-3
N3EGS4, N4EGS4
N3EFS4                    User priority in VLAN  0-7             0-1
N2EMR0, N2EGR2            User priority in VLAN  0-7             A/B/C
For the CoS of the IPTOS type, follow Table A-8 to set the mapping from IPTOS Priority to
CoS Priority.
Table A-8 CoS priority of the IPTOS type

Data Board                CoS Parameter  Value Range of      Value Range of
                                         CoS Parameter       CoS Priority
N4EFS0, N2EFS4, N2EGS2,   IPTOS          0000-1111 (binary)  0-7
N1EFS0A, N1EMS2, N3EGS2
N5EFS0, N1EMS4, N1EGS4,   IPTOS          0000-1111 (binary)  0-3
N3EGS4, N4EGS4
N3EFS4                    IPTOS          0000-1111 (binary)  0-1
N2EMR0, N2EGR2            IPTOS          0000-1111 (binary)  A/B/C
For the CoS of the DSCP type, follow Table A-9 to set the mapping from DSCP Priority to
CoS Priority.
Table A-9 CoS priority of the DSCP type

Data Board                CoS Parameter  Value Range of          Value Range of
                                         CoS Parameter           CoS Priority
N4EFS0, N2EFS4, N2EGS2,   DSCP           000000-111111 (binary)  0-7
N1EFS0A, N1EMS2, N3EGS2
N5EFS0, N1EMS4, N1EGS4,   DSCP           000000-111111 (binary)  0-3
N3EGS4, N4EGS4
N3EFS4                    DSCP           000000-111111 (binary)  0-1
N2EMR0, N2EGR2            DSCP           000000-111111 (binary)  A/B/C
Configuration Guidelines
Based on the requirements, you can map the packets into different queues by setting CoS
Priority.
If CoS Type is set to VLAN Priority, IPTOS or DSCP, generally, you can map the packets
into the proper CoS Priority according to the priority information contained in the packets.
At the application layer, if a service (for example, VoIP, video conferencing, or video on
demand) has higher requirements for QoS, set a higher priority for the service so that it obtains
better bandwidth and service guarantees. To ensure good bandwidth multiplexing, avoid a high
ratio of real-time services in the network. For a service (for example, Internet access, E-mail,
or FTP) that has lower requirements for QoS, set a lower priority for the service to allow better
bandwidth sharing and contention.
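As a sketch of the priority-to-queue mapping described above, the helper below evenly folds the eight VLAN user priorities onto a board's CoS queues (four queues on N1EMS4/N1EGS4-class boards, eight on others). Both the function and the even mapping are illustrative assumptions; the actual mapping is whatever you configure on the U2000.

```python
# Hypothetical even mapping from VLAN user priority (0-7) to a CoS queue.
# The actual per-board mapping is configured on the U2000; this only
# illustrates how 8 priorities fold onto 4 or 8 queues.
def vlan_pri_to_cos(vlan_pri, num_queues=4):
    if not 0 <= vlan_pri <= 7:
        raise ValueError("VLAN user priority must be in 0-7")
    return vlan_pri * num_queues // 8   # 0-1 -> 0, ..., 6-7 -> 3 for 4 queues
```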
For the N1EAS2 board, queue 7 has absolute priority. That is, if queue 7 is congested, all
the other queues must wait. Generally, queue 7 is used for protocol or management packets of
light traffic. The remaining bandwidth is allocated proportionally among the other queues.
Queue 0 has the lowest priority. A higher-priority queue consumes no bandwidth when it has
no service to transmit.
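The N1EAS2 scheduling rule just described (strict priority for queue 7, weighted sharing below it) can be sketched as follows; the function name and the tie-breaking rule are illustrative, not the board's firmware behavior.

```python
# Sketch of the scheduling rule described above: queue 7 has absolute
# priority; the remaining backlogged queues share bandwidth by weight
# (a crude stand-in for weighted round robin).
def pick_next_queue(queues, weights):
    if queues.get(7):
        return 7            # queue 7 congested: every other queue waits
    backlogged = [q for q in queues if q != 7 and queues[q]]
    if not backlogged:
        return None         # idle queues consume no bandwidth
    return max(backlogged, key=lambda q: (weights[q], q))
```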
A.11.13 Shaping
Description
Shaping indicates the flow shaping function of all four queues of every port on the N1EMS4,
N1EGS4, N3EGS4, or N4EGS4. This parameter includes the guaranteed bandwidth and the
capacity of the extra burst cache.
For the N1EAS2, every port has eight queues, and each queue has a flow shaping function. The
parameters to set include the guaranteed bandwidth and the peak bandwidth.
Values
Name
Value Range
Default Value
Port List
Enabled, Disabled
Disabled
CIR
Name
Value Range
Default Value
DMBS
Configuration Guidelines
The user can set this parameter according to the actual bandwidth rate of the output port.
Values
Value Range
Default Value
Enabled, Disabled
Enabled
Configuration Guidelines
Set this parameter to the same value for the equipment at the local and opposite ends. That is, if
scrambling is performed on the payload of cells transmitted by the equipment at the opposite
end, the payload of cells transmitted by the equipment at the local end must also be scrambled.
Generally, set this parameter to the default value (Enabled).
Related Information
Scrambling is another type of coding technology that is commonly used for serial link
transmission. The purpose of scrambling is to suppress long runs of consecutive "1" bits and
consecutive "0" bits so as to facilitate extraction of clock signals from line signals. Only line
signals are scrambled. Therefore, the rate of SDH line signals is the same as that of standard
signals at the SDH electrical interface, and scrambling adds no extra optical power penalty to
the laser at the transmit end.
An ATM cell is the basic carrier for transmitting ATM information. An ATM cell, which consists
of only 53 bytes, is divided into a 5-byte header and a 48-byte payload.
Values
Value Range
Default Value
Configuration Guidelines
Set this parameter according to the VPI values required for a port. For example, if a port needs
to converge 500 PVCs and if each PVC has a unique VPI, 500 VPIs are required for service
identification. Therefore, ensure that the offset value between the maximum number of VPIs
and the minimum number of VPIs is greater than 500.
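The guideline above reduces to a simple arithmetic check; the helper below mirrors it (the function name is made up for illustration).

```python
# Mirrors the guideline: the offset between the maximum and minimum VPI
# values configured on a port must exceed the number of PVCs it converges.
def vpi_window_ok(min_vpi, max_vpi, num_pvcs):
    return (max_vpi - min_vpi) > num_pvcs
```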
For the TNN1D75E and TNN1D12E boards, this parameter cannot be set to the minimum
number of VPIs. That is, the number of VPIs is not limited.
For the TNN1AFO1 board, the sum of the minimum numbers of VPIs of all ports is less than
4096.
Related Information
The operations on an ATM network are similar to a call connection process. Before a
conversation, a virtual channel connection (VCC), which is identified with a VPI/a virtual
channel identifier (VCI), must be set up between the source end and the destination end.
l
VP switching: changes only VPI values and transparently transmits VCI values in the
switching process.
VC switching: changes VPI values and VCI values in the switching process.
Values
Value Range
Default Value
0-65535
65535
Configuration Guidelines
Set this parameter according to the number of VCIs required for a port. For example, if a port
needs to converge 500 PVCs and if each PVC has a unique VCI, 500 VCIs are required for
service identification. Therefore, ensure that the offset value between the maximum number of
VCIs and the minimum number of VCIs is greater than 500.
For the TNN1D75E and TNN1D12E boards, this parameter cannot be set to the minimum
number of VCIs. That is, the number of VCIs is not limited.
This parameter can be set to other values only when the number of VPIs that are applicable to
VCCs at a port is not 65535.
Related Information
The operations on an ATM network are similar to a call connection process. Before a
conversation, a virtual channel connection (VCC), which is identified with a virtual path
identifier (VPI)/a VCI, must be set up between the source end and the destination end.
l
VP switching: changes only VPI values and transparently transmits VCI values in the
switching process.
VC switching: changes VPI values and VCI values in the switching process.
Values
Board Name
Value Range
Default Value
TNN1AFO1
0-256, 65535
65535
Configuration Guidelines
Set this parameter according to the number of VCIs required for each VPI at a port.
For example, if 50 PVCs need to be established at a port and if each PVC has a unique VPI, 50
VPIs are required for service identification. Therefore, set the number of VPIs available for VC
switching to 50. The value 65535 indicates that the number of VPIs for establishment of a VC
connection is not limited.
If any connections are available at a port, this parameter is unavailable.
Related Information
The operations on an ATM network are similar to a call connection process. Before a
conversation, a virtual channel connection (VCC), which is identified with a VPI/a VCI, must
be set up between the source end and the destination end.
l
VP switching: changes only VPI values and transparently transmits VCI values in the
switching process.
VC switching: changes VPI values and VCI values in the switching process.
Values
Value Range
Default Value
128
Configuration Guidelines
The TNN1D75E and TNN1D12E boards support the IMA function. Therefore, set the length of
IMA frames to be the same as that of the IMA frames for the interconnected boards.
For example, if IMA groups on two NEs are connected to each other, set the length of IMA
frames transmitted at both the local and opposite ends to 128.
Related Information
None.
Values
Value Range
Default Value
Value
Description
Symmetrical Mode and Symmetrical Operation
Indicates that all links in the IMA group are configured with the capabilities of receiving and
transmitting services in two directions when you configure an IMA group (in symmetrical
configuration mode). If services on a link are simultaneously interrupted in the receive and
transmit directions, this link is bidirectionally unavailable (in symmetrical operation mode).
Value
Description
Symmetrical Mode and Asymmetrical Operation
Indicates that all links in the IMA group are configured with the capabilities of receiving and
transmitting services in two directions when you configure an IMA group (in symmetrical
configuration mode). If services on a link are interrupted in one direction but available in the
other direction, this link is unidirectionally available (in asymmetrical operation mode).
Asymmetrical Mode and Asymmetrical Operation
Indicates that some links in the IMA group are configured with only the capabilities of receiving
and transmitting services in one direction when you configure an IMA group (in asymmetrical
configuration mode). If services on a link are interrupted in one direction but available in the
other direction, this link is unidirectionally available (in asymmetrical operation mode).
Configuration Guidelines
For the TNN1D75E and TNN1D12E boards, this parameter can be set only to Symmetrical
Mode and Symmetrical Operation.
Related Information
None.
the transmit and receive rates of the IMA group decrease, and the transmission delay of the IMA
group may increase.
Values
Value Range
Default Value
1-120
25
Configuration Guidelines
Set this parameter properly according to the requirements for link delay and service delay.
It is recommended that you take the default value.
Related Information
None.
An incorrect setting of this parameter may result in service interruption. If the number of active
links in the transmit direction of an IMA group is insufficient for the configured bandwidth but
is equal to or greater than the specified minimum number of active links, the IMA group still
functions properly. If the number of active links in the transmit direction of an IMA group is
less than the specified minimum number of active links, services of the IMA group are
interrupted.
Values
Value Range
Default Value
1-16
Configuration Guidelines
Set the minimum number of active links, depending on the equipment at the opposite end. For
example, 10 active links are currently available, the total bandwidth is 20 Mbit/s, and the
interconnected equipment requires a minimum of 15 Mbit/s bandwidth. To meet the preceding
requirements, set the minimum number of active links to 8. If the minimum number of active
links is less than 8, the bandwidth is less than 15 Mbit/s. As a result, services in the IMA group
are interrupted.
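The arithmetic in this example can be reproduced directly (illustrative helper; each active link is assumed to carry an equal share of the total bandwidth):

```python
import math

# Minimum number of active links needed so that the remaining IMA bandwidth
# still meets the peer's requirement; 10 links / 20 Mbit/s -> 2 Mbit/s each,
# so 15 Mbit/s requires ceil(15 / 2) = 8 active links.
def min_active_links(total_links, total_bw_mbit, required_bw_mbit):
    per_link = total_bw_mbit / total_links
    return math.ceil(required_bw_mbit / per_link)
```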
For the TNN1D75E and TNN1D12E boards, configure a maximum of 16 IMA links for an IMA
group due to hardware constraints. Therefore, the minimum number of active links in the transmit
direction cannot be more than 16.
Related Information
None.
Values
Value Range
Default Value
1-16
Configuration Guidelines
Set the minimum number of active links, depending on the equipment at the opposite end. For
the TNN1D75E and TNN1D12E boards, configure a maximum of 16 IMA links for an IMA
group due to hardware constraints. Therefore, the minimum number of active links in the receive
direction cannot be more than 16.
You can specify the minimum number of active links based on the actual service transmission.
For example, 10 active links are currently available, the total bandwidth is 20 Mbit/s, and the
interconnected equipment requires a minimum of 15 Mbit/s bandwidth. To meet the preceding
requirements, set the minimum number of active links to 8. If the minimum number of active
links is less than 8, the bandwidth is less than 15 Mbit/s. As a result, services in the IMA group
are interrupted.
Related Information
None.
Values
Value Range
Default Value
CTC Mode
Value
Description
CTC Mode
Indicates the common transmit clock (CTC) mode in which the same
clock source is used for all links in an IMA group.
ITC Mode
Indicates the independent transmit clock (ITC) mode in which the links
in an IMA group can use different clock sources.
Configuration Guidelines
The clock modes must be the same at both ends of an IMA group.
Related Information
None.
Values
Value Range
Default Value
Start-Up
Value Range
Description
Start-Up
Start-Up-ACK
Config-Aborted
Insufficient-Links
Value Range
Description
Operational
Configuration Guidelines
None.
Related Information
None.
Values
Value Range
Default Value
Unknown
Value
Description
Unknown
Indicates that the delay value of links is not computed. If an IMA group
is being configured or if the equipment at the local end cannot receive
information from links at the opposite end, the delay on links cannot
be computed.
Valid
Invalid
Configuration Guidelines
None.
Related Information
None.
Values
Value Range
Default Value
Description
PVP
PVC
Port Transparent
Configuration Guidelines
If services that contain the same VPI need to be converged at a port, set this parameter to
PVP.
If services that contain the same VPI and VCI need to be transmitted, set this parameter to
PVC.
Related Information
A PVC and a PVP are virtual transmission paths, but a PVC is contained in a PVP.
l
VP switching: changes only VPI values and transparently transmits VCI values in the
switching process.
VC switching: changes VPI values and VCI values in the switching process.
Values
Value Range
Default Value
1-256
Configuration Guidelines
An uplink policy determines the traffic parameters and QoS parameters in a specified direction
of an ATM connection. An uplink policy is also used for setting the forwarding priority of QoS
parameters of a network, based on the characteristics of data transmitted on the ATM connection.
For services that require high transmission quality, select a preferred ATM policy to ensure
proper data transmission.
Related Information
None.
Values
Value Range
Default Value
1-256
None.
Configuration Guidelines
A downlink policy determines the traffic parameters and QoS parameters in a specified direction
of an ATM connection. A downlink policy is also used for setting the forwarding priority of
QoS parameters of a network, based on the characteristics of data transmitted on the ATM
connection. For services that require high transmission quality, select a preferred ATM policy
to ensure proper data transmission.
Related Information
None.
Values
Value
Default Value
2-8
1 (DefaultAtmCosMap)
Configuration Guidelines
The ATM service class mapping table specifies mapping between ATM services and CoS
priorities to provide differentiated assurance of service quality. You can adopt the default or
user-defined ATM service class mapping table. You can define a maximum of seven ATM
service class mapping tables.
Related Information
None.
interrupted. For the other four types of services, a certain amount of buffer size is allocated, and
the burst services whose transmission rates are greater than the PCR are discarded only when
they exceed the buffer size.
Values
For different service types, available traffic types are different.
Service Type  Value Range                                 Default Value
CBR           NoClpNoScr, ClpNoTaggingNoScr,              NoClpNoScr
              ClpTaggingNoScr, ClpTransparentNoScr,
              NoClpNoScrCdvt
rt-VBR        ClpTransparentScr, NoClpScrCdvt,            ClpTransparentScr
              ClpNoTaggingScrCdvt, ClpTaggingScrCdvt
nrt-VBR       NoClpScr, ClpNoTaggingScr, ClpTaggingScr    NoClpScr
UBR           NoTrafficDescriptor, NoClpNoScr,            NoTrafficDescriptor
              NoClpTaggingNoScr, NoClpNoScrCdvt
UBR+          atmNoTrafficDescriptorMcr, atmNoClpMcr,     atmNoTrafficDescriptorMcr
              atmNoClpMcrCdvt
Configuration Guidelines
For CBR services, this parameter is set as follows. In the tables below, each column indicates
whether the traffic type supports the corresponding capability (CDVT; handling cells with CLPs
being 0 and 1 differently; cell labeling; PCR; MCR; MBS; SCR).

Traffic Type          CDVT  CLP 0/1 Differently  Cell Labeling  PCR  MBS  SCR
NoClpNoScr            No    No                   No             Yes  No   No
ClpNoTaggingNoScr     No    Yes                  No             Yes  No   No
ClpTaggingNoScr       No    Yes                  Yes            Yes  No   No
ClpTransparentNoScr   Yes   No                   No             Yes  No   No
NoClpNoScrCdvt        Yes   No                   No             Yes  No   No

For rt-VBR services, this parameter is set as follows:

Traffic Type          CDVT  CLP 0/1 Differently  Cell Labeling  PCR  MCR  MBS  SCR
ClpTransparentScr     Yes   No                   No             Yes  No   Yes  Yes
NoClpScrCdvt          Yes   No                   No             Yes  No   Yes  Yes
ClpNoTaggingScrCdvt   Yes   Yes                  No             Yes  No   Yes  Yes
ClpTaggingScrCdvt     Yes   Yes                  Yes            Yes  No   Yes  Yes

For nrt-VBR services, this parameter is set as follows:

Traffic Type          CDVT  CLP 0/1 Differently  Cell Labeling  PCR  MCR  MBS  SCR
NoClpScr              No    No                   No             Yes  No   Yes  Yes
ClpNoTaggingScr       No    Yes                  No             Yes  No   Yes  Yes
ClpTaggingScr         No    Yes                  Yes            Yes  No   Yes  Yes

For UBR services, this parameter is set as follows:

Traffic Type          CDVT  CLP 0/1 Differently  Cell Labeling  PCR  MCR  MBS  SCR
NoTrafficDescriptor   No    No                   No             No   No   No   No
NoClpNoScr            No    No                   No             Yes  No   No   No
NoClpTaggingNoScr     Yes   Yes                  Yes            Yes  No   No   No
NoClpNoScrCdvt        Yes   No                   No             Yes  No   No   No

For UBR+ services, this parameter is set as follows:

Traffic Type               CDVT  CLP 0/1 Differently  Cell Labeling  PCR  MCR  MBS  SCR
atmNoTrafficDescriptorMcr  No    No                   No             No   Yes  No   No
atmNoClpMcr                No    No                   No             Yes  Yes  No   No
atmNoClpMcrCdvt            Yes   Yes                  Yes            Yes  Yes  No   No
Related Information
None.
Values
Value Range
Default Value
Unit
90-149078
None.
Cell/s
Configuration Guidelines
The parameter value must not be larger than the physical bandwidth of an ATM port or an
inverse multiplexing for ATM (IMA) group.
For example, the bandwidth (based on the number of ATM cells) of one E1 in an IMA group is
derived from the formula: (30 x 8 x 8000) / (53 x 8) x ((M-1) / M) x (2048 / 2049).
l   The letter M indicates the frame length of an IMA group. According to the IMA protocol,
an IMA control protocol (ICP) cell is inserted to every M-1 user cells. An ICP cell, which
is not a user cell, is used for transmission of IMA protocol information. In practice, the ICP
cell needs to be removed from the available bandwidth. The expression 2048 / 2049
indicates that one more ICP cell needs to be inserted to every 2048 cells.
l   If the frame length (M) of an IMA group is 128, the maximum number of cells derived
from the preceding formula is 4490 (rounded off to an integer). Therefore, the Clp0Pcr
value specified for an IMA group in which only one E1 service is available needs to be not
more than 4490.
l   If the IMA protocol is disabled for the E1 service, the bandwidth is derived from the
formula: (30 x 8 x 8000) / (53 x 8).
l   If the transmission rate at a port of AFO1 is STM-1, the bandwidth (based on the number
of ATM cells) is 149760 (rounded off to an integer).
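The formula above can be checked numerically; this snippet simply evaluates the arithmetic from the text (it is not an NE computation):

```python
# ATM cell bandwidth of one E1 in an IMA group, per the formula above:
# (30 x 8 x 8000) / (53 x 8) x ((M-1)/M) x (2048/2049), truncated to cells/s.
def e1_ima_cell_rate(m=128):
    raw = (30 * 8 * 8000) / (53 * 8)    # E1 payload bit rate over cell bits
    return int(raw * ((m - 1) / m) * (2048 / 2049))
```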
Related Information
According to ATM Forum, ATM traffic is currently controlled by two-level token buckets.
Generally, a level-1 token bucket limits the PCR, and a level-2 token bucket limits the sustainable
cell rate (SCR). The two-level token buckets adopt the GCRA algorithm to control the traffic.
[Figure: dual token bucket model - level-2 leaky bucket parameters (SCR, BT+CDVT) classify
CLP0+1 and CLP0 cells as conforming or non-conforming]

Figure A-8 Structure of the ATM UNI cell header in the dual token bucket model

[Figure: ATM UNI cell header fields - GFC (4 bits), VPI (4+4 bits), VCI (4+8+4 bits),
PT (3 bits), CLP (1 bit), HEC (8 bits)]
Values
Value Range
Default Value
Unit
90-149078
None.
Cell/s
Configuration Guidelines
The parameter value must not be larger than the physical bandwidth of an ATM port or an
inverse multiplexing for ATM (IMA) group.
For example, the bandwidth (based on the number of ATM cells) of one E1 in an IMA group is
derived from the formula: (30 x 8 x 8000) / (53 x 8) x ((M-1) / M) x (2048 / 2049).
l   The letter M indicates the frame length of an IMA group. According to the IMA protocol,
an IMA control protocol (ICP) cell is inserted to every M-1 user cells. An ICP cell, which
is not a user cell, is used for transmission of IMA protocol information. In practice, the ICP
cell needs to be removed from the available bandwidth. The expression 2048 / 2049
indicates that one more ICP cell needs to be inserted to every 2048 cells.
l   If the frame length (M) of an IMA group is 128, the maximum number of cells derived
from the preceding formula is 4490 (rounded off to an integer). Therefore, the Clp0Pcr
value specified for an IMA group in which only one E1 service is available needs to be not
more than 4490.
l   If the IMA protocol is disabled for the E1 service, the bandwidth is derived from the
formula: (30 x 8 x 8000) / (53 x 8).
l   If the transmission rate at a port of AFO1 is STM-1, the bandwidth (based on the number
of ATM cells) is 149760 (rounded off to an integer).
Related Information
According to ATM Forum, ATM traffic is currently controlled by two-level token buckets.
Generally, a level-1 token bucket limits the peak cell rate (PCR), and a level-2 token bucket
limits the SCR. The two-level token buckets adopt the GCRA algorithm to control the traffic.
[Figure: dual token bucket model - level-2 leaky bucket parameters (SCR, BT+CDVT) classify
CLP0+1 and CLP0 cells as conforming or non-conforming]

Figure A-10 Structure of the ATM UNI cell header in the dual token bucket model

[Figure: ATM UNI cell header fields - GFC (4 bits), VPI (4+4 bits), VCI (4+8+4 bits),
PT (3 bits), CLP (1 bit), HEC (8 bits)]
Values
Value Range
Default Value
Unit
90-149078
None.
Cell/s
Configuration Guidelines
The parameter value must not be larger than the physical bandwidth of an ATM port or an
inverse multiplexing for ATM (IMA) group.
For example, the bandwidth (based on the number of ATM cells) of one E1 in an IMA group is
derived from the formula: (30 x 8 x 8000) / (53 x 8) x ((M-1) / M) x (2048 / 2049).
l   The letter M indicates the frame length of an IMA group. According to the IMA protocol,
an IMA control protocol (ICP) cell is inserted to every M-1 user cells. An ICP cell, which
is not a user cell, is used for transmission of IMA protocol information. In practice, the ICP
cell needs to be removed from the available bandwidth. The expression 2048 / 2049
indicates that one more ICP cell needs to be inserted to every 2048 cells.
l   If the frame length (M) of an IMA group is 128, the maximum number of cells derived
from the preceding formula is 4490 (rounded off to an integer). Therefore, the Clp0Pcr
value specified for an IMA group in which only one E1 service is available needs to be not
more than 4490.
l   If the IMA protocol is disabled for the E1 service, the bandwidth is derived from the
formula: (30 x 8 x 8000) / (53 x 8).
l   If the transmission rate at a port of AFO1 is STM-1, the bandwidth (based on the number
of ATM cells) is 149760 (rounded off to an integer).
Related Information
According to ATM Forum, ATM traffic is currently controlled by two-level token buckets.
Generally, a level-1 token bucket limits the PCR, and a level-2 token bucket limits the sustainable
cell rate (SCR). The two-level token buckets adopt the GCRA algorithm to control the traffic.
[Figure: dual token bucket model - level-2 leaky bucket parameters (SCR, BT+CDVT) classify
CLP0+1 and CLP0 cells as conforming or non-conforming]

Figure A-12 Structure of the ATM UNI cell header in the dual token bucket model

[Figure: ATM UNI cell header fields - GFC (4 bits), VPI (4+4 bits), VCI (4+8+4 bits),
PT (3 bits), CLP (1 bit), HEC (8 bits)]
Values
Value Range
Default Value
Unit
90-149078
None.
Cell/s
Configuration Guidelines
The parameter value must not be larger than the physical bandwidth of an ATM port or an
inverse multiplexing for ATM (IMA) group.
For example, the bandwidth (based on the number of ATM cells) of one E1 in an IMA group is
derived from the formula: (30 x 8 x 8000) / (53 x 8) x ((M-1) / M) x (2048 / 2049).
l   The letter M indicates the frame length of an IMA group. According to the IMA protocol,
an IMA control protocol (ICP) cell is inserted to every M-1 user cells. An ICP cell, which
is not a user cell, is used for transmission of IMA protocol information. In practice, the ICP
cell needs to be removed from the available bandwidth. The expression 2048 / 2049
indicates that one more ICP cell needs to be inserted to every 2048 cells.
l   If the frame length (M) of an IMA group is 128, the maximum number of cells derived
from the preceding formula is 4490 (rounded off to an integer). Therefore, the Clp0Pcr
value specified for an IMA group in which only one E1 service is available needs to be not
more than 4490.
l   If the IMA protocol is disabled for the E1 service, the bandwidth is derived from the
formula: (30 x 8 x 8000) / (53 x 8).
l   If the transmission rate at a port of AFO1 is STM-1, the bandwidth (based on the number
of ATM cells) is 149760 (rounded off to an integer).
Related Information
According to ATM Forum, ATM traffic is currently controlled by two-level token buckets.
Generally, a level-1 token bucket limits the peak cell rate (PCR), and a level-2 token bucket
limits the SCR. The two-level token buckets adopt the GCRA algorithm to control the traffic.
[Figure: dual token bucket model - level-2 leaky bucket parameters (SCR, BT+CDVT) classify
CLP0+1 and CLP0 cells as conforming or non-conforming]

Figure A-14 Structure of the ATM UNI cell header in the dual token bucket model

[Figure: ATM UNI cell header fields - GFC (4 bits), VPI (4+4 bits), VCI (4+8+4 bits),
PT (3 bits), CLP (1 bit), HEC (8 bits)]
Values
Value Range
Default Value
Unit
566-32664
None.
Cell/s
Configuration Guidelines
The parameter value must not be larger than the physical bandwidth of an ATM port or an
inverse multiplexing for ATM (IMA) group.
For example, the bandwidth (based on the number of ATM cells) of one E1 in an IMA group is
derived from the formula: (30 x 8 x 8000) / (53 x 8) x ((M-1) / M) x (2048 / 2049).
l   The letter M indicates the frame length of an IMA group. According to the IMA protocol,
an IMA control protocol (ICP) cell is inserted to every M-1 user cells. An ICP cell, which
is not a user cell, is used for transmission of IMA protocol information. In practice, the ICP
cell needs to be removed from the available bandwidth. The expression 2048 / 2049
indicates that one more ICP cell needs to be inserted to every 2048 cells.
l   If the frame length (M) of an IMA group is 128, the maximum number of cells derived
from the preceding formula is 4490 (rounded off to an integer). Therefore, the Clp0Pcr
value specified for an IMA group in which only one E1 service is available needs to be not
more than 4490.
l   If the IMA protocol is disabled for the E1 service, the bandwidth is derived from the
formula: (30 x 8 x 8000) / (53 x 8).
l   If the transmission rate at a port of AFO1 is STM-1, the bandwidth (based on the number
of ATM cells) is 149760 (rounded off to an integer).
Related Information
According to ATM Forum, ATM traffic is currently controlled by two-level token buckets.
Generally, a level-1 token bucket limits the peak cell rate (PCR), and a level-2 token bucket
limits the sustainable cell rate (SCR). The two-level token buckets adopt the GCRA algorithm
to control the traffic.
[Figure: dual token bucket model - level-2 leaky bucket parameters (SCR, BT+CDVT) classify
CLP0+1 and CLP0 cells as conforming or non-conforming]

Figure A-16 Structure of the ATM UNI cell header in the dual token bucket model

[Figure: ATM UNI cell header fields - GFC (4 bits), VPI (4+4 bits), VCI (4+8+4 bits),
PT (3 bits), CLP (1 bit), HEC (8 bits)]
Values
Value Range
Default Value
Unit
2-200000
None.
cell
Configuration Guidelines
The greater the parameter value, the deeper the level-2 token bucket on the ATM path, and the
better the service performance for burst cells. Therefore, if conditions allow, you can set the
maximum burst size to a larger value.
Related Information
Token bucket algorithm: The equipment constantly monitors the rate of burst cells. If the rate
of burst cells exceeds the specified value (1/I), the equipment computes and accumulates the
offset value between the theoretical arrival time of cells and the actual arrival time of cells. If
the accumulated offset value is within the specified value (L), the equipment considers that the
cells are conforming cells. If the accumulated offset value exceeds the specified value (L), the
equipment considers that the cells are non-conforming cells, which are handled according to the
methods specified in the contract. That is, non-conforming cells are discarded (based on the CLP
values or regardless of the CLP values) or added with tagging labels.
According to ATM Forum, ATM traffic is currently controlled by two-level token buckets.
Generally, a level-1 token bucket limits the peak cell rate (PCR), and a level-2 token bucket
limits the sustainable cell rate (SCR). The two-level token buckets adopt the GCRA algorithm
to control the traffic.
l For a level-1 token bucket, the value I is 1/PCR, and the value L is CDVT. If the rate of burst cells exceeds the specified PCR value, the equipment computes and accumulates the offset between the theoretical arrival time and the actual arrival time of cells. If the accumulated offset exceeds the specified value L (the CDVT value for a level-1 token bucket), the cells are discarded.
l For a level-2 token bucket, the value I is 1/SCR, and the value L is derived from the formula BT + CDVT. The equipment performs the same operations as for a level-1 token bucket, except that the parameters differ. The value BT is derived from the formula (MBS - 1) x (1/SCR - 1/PCR).
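The GCRA behavior described above can be sketched with the standard virtual-scheduling formulation. This is an illustrative model only, not the equipment's implementation; the function and variable names are invented for the sketch. I is the expected inter-cell interval (1/PCR for a level-1 bucket, 1/SCR for a level-2 bucket), and L is the tolerance (CDVT, or BT + CDVT for a level-2 bucket).

```python
def make_gcra(increment, limit):
    """Generalized Cell Rate Algorithm (virtual-scheduling form).

    increment -- I, the expected inter-cell time (1/PCR or 1/SCR)
    limit     -- L, the tolerance (CDVT, or BT + CDVT for level 2)
    Returns a function that, given a cell's arrival time in seconds,
    reports whether the cell conforms.
    """
    tat = 0.0  # theoretical arrival time of the next cell

    def arrive(t):
        nonlocal tat
        if t < tat - limit:           # cell arrived too early: non-conforming
            return False              # discard or tag per the traffic contract
        tat = max(t, tat) + increment # conforming: advance the schedule
        return True

    return arrive

# Level-1 bucket example: PCR = 10 cells/s (I = 0.1 s), CDVT = 0.05 s
police = make_gcra(0.1, 0.05)
print(police(0.0))   # first cell conforms
print(police(0.01))  # arrives 0.09 s early (> CDVT): non-conforming
```

A level-2 bucket is obtained from the same function by passing I = 1/SCR and L = BT + CDVT, matching the description above.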
Values
Value Range
Default Value
Unit
7 to 13300000
None.
0.1us
Configuration Guidelines
The greater the parameter value, the better the service handles burst cells. If conditions permit, set CDVT to a large value to minimize packet loss.
Related Information
Token bucket algorithm: The equipment constantly monitors the rate of burst cells. If the rate of burst cells exceeds the specified value (1/I), the equipment computes and accumulates the offset between the theoretical arrival time and the actual arrival time of cells. If the accumulated offset is within the specified value (L), the equipment considers the cells conforming. If the accumulated offset exceeds the specified value (L), the equipment considers the cells non-conforming and handles them according to the methods specified in the traffic contract. That is, non-conforming cells are discarded (based on the cell loss priority (CLP) values or regardless of the CLP values) or tagged.
According to ATM Forum, ATM traffic is currently controlled by two-level token buckets.
Generally, a level-1 token bucket limits the peak cell rate (PCR), and a level-2 token bucket
limits the sustainable cell rate (SCR). The two-level token buckets adopt the GCRA algorithm
to control the traffic.
l For a level-1 token bucket, the value I is 1/PCR, and the value L is CDVT. If the rate of burst cells exceeds the specified PCR value, the equipment computes and accumulates the offset between the theoretical arrival time and the actual arrival time of cells. If the accumulated offset exceeds the specified value L (the CDVT value for a level-1 token bucket), the cells are discarded.
l For a level-2 token bucket, the value I is 1/SCR, and the value L is derived from the formula BT + CDVT. The equipment performs the same operations as for a level-1 token bucket, except that the parameters differ. The value BT is derived from the formula (MBS - 1) x (1/SCR - 1/PCR).
Issue 03 (2012-11-30)
Values
Value Range
Default Value
Disabled, Enabled
Disabled
Configuration Guidelines
The traffic monitoring function is classified into UPC and NPC by interface position.
The UPC function is available for UNI-NNI interfaces, whereas the NPC function is available for NNI-NNI interfaces. The UPC and NPC functions are similar and are used for processing non-conforming packets during communication (for example, when the number of burst packets exceeds the specified value).
The UPC and NPC functions process packets in three modes: transmitting conforming packets; labeling non-conforming packets (setting the CLP field in the packets to 1) and lowering their forwarding priority; and discarding non-conforming packets immediately.
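The three processing modes above can be sketched as a single policing decision applied to each cell. The types and names below are illustrative assumptions, not part of the equipment's software.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    clp: int = 0  # cell loss priority bit from the cell header

def police(cell, conforming, mode="tag"):
    """Illustrative UPC/NPC action on one cell (sketch, not device code).

    mode -- "pass": transmit conforming cells, drop the rest
            "tag":  set CLP=1 on non-conforming cells (lower priority)
            "drop": discard non-conforming cells immediately
    Returns the cell to forward, or None if the cell is discarded.
    """
    if conforming:
        return cell           # conforming cells are always transmitted
    if mode == "tag":
        cell.clp = 1          # tagged cells are discarded first on congestion
        return cell
    return None               # "pass" and "drop" discard non-conforming cells

tagged = police(Cell(), conforming=False, mode="tag")
print(tagged.clp)  # 1
```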
Related Information
According to asynchronous transfer mode (ATM) Forum, ATM traffic is currently controlled
by two-level token buckets. Generally, a level-1 token bucket limits the peak cell rate ( PCR),
and a level-2 token bucket limits the sustainable cell rate (SCR). The two-level token buckets
adopt the GCRA algorithm to control the traffic.
Values
Value Range
Default Value
Source, Sink
Source
Configuration Guidelines
Select a proper direction of the connection as required at the end where ATM OAM cells are
identified and processed.
Related Information
An ATM connection has a source end and a sink end. When setting the segment point and
endpoint attributes, specify the direction of the ATM connection. That is, specify whether to set
the segment and end attributes for the source or sink end of the ATM connection.
Figure A-18 Setting a segment point (a segment point is set at the source end of the connection with Connection ID = 1 on the NE; the connection has a source end and a sink end)
After you specify the segment and end points in a direction of an ATM connection for an NE,
the NE can identify and process ATM OAM cells in the receive direction of the port where the
segment and end points are specified. For example, as shown in Figure A-18, if a segment point
is specified for the source end of connection 1, the NE can identify and process ATM OAM cells
transmitted between two segment points in the receive direction at the source end.
Values
Value Range
Default Value
Value
Description
Endpoint
Segment point
Segment and Endpoint
Configuration Guidelines
Set the segment and end attributes properly according to the OAM maintenance segments, as shown in Figure A-19.
Figure A-19 Schematic diagram of a maintenance segment (points between User 1 and User 2: 1 End, 2 Seg, 3 Inner, 4 Inner, 5 Seg + End; the segment spans the Seg points, and the end-to-end path spans the End points)
1.
When enabling the ATM OAM function for a maintenance segment, set the endpoint
attribute for the boundary of the maintenance segment. For example, as shown in Figure
A-19, separately set the endpoint attribute for points 1 and 2 of the maintenance segment
of user 1.
2.
3.
A point that works as both an end point and a segment point must be set as a segment-end point, such as point 5 shown in Figure A-19.
Related Information
An ATM connection has a source end and a sink end. When setting the segment point and
endpoint attributes, specify the direction of the ATM connection. That is, specify whether to set
the segment and end attributes for the source or sink end of the ATM connection.
Figure A-20 Setting a segment point (a segment point is set at the source end of the connection with Connection ID = 1 on the NE; the connection has a source end and a sink end)
After you specify the segment and end points in a direction of an ATM connection for an NE,
the NE can identify and process ATM OAM cells in the receive direction of the port where the
segment and end points are specified. For example, as shown in Figure A-20, if a segment point
is specified for the source end of connection 1, the NE can identify and process ATM OAM cells
transmitted between two segment points in the receive direction at the source end.
Values
Value Range
Default Value
[2BYTE]00 00-ff ff
[2BYTE]00 00
Configuration Guidelines
The binary coded decimal (BCD) format is used for the country code. For example, 0x0086 is used for China, 0x0001 for the U.S.A., and 0x0044 for the U.K.
The default value is 0x0000, which is recommended.
Related Information
According to ITU-T I.610, an LLID can be specified in multiple coding methods, and the first byte of the LLID identifies the coding method.
In the GUI of the network management system, an LLID is specified in coding method 2 (the first byte is 0x01) described in ITU-T I.610, and a window is available for setting the other 15 bytes.
The sequence and meaning of each field are as follows:
2-byte country code (defaults to 0x0086 in BCD) + 2-byte network code (defaults to 0x0000 in BCD) + 11-byte NE code (the first four bytes default to the NE ID, and the last seven bytes default to 0s)
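The 16-byte LLID layout above (coding-method byte 0x01, then country code, network code, and NE code) can be assembled as in the following sketch; the helper name and the example NE ID are illustrative assumptions.

```python
def build_llid(country=0x0086, network=0x0000, ne_id=0x00000001):
    """Assemble a 16-byte LLID per the layout above (illustrative sketch).

    Byte 0 is the coding-method identifier (0x01 for coding method 2 of
    ITU-T I.610). The country and network codes are two-byte fields, and
    the 11-byte NE code is the 4-byte NE ID followed by seven 0x00 bytes.
    """
    llid = bytes([0x01])                 # coding-method byte
    llid += country.to_bytes(2, "big")   # 2-byte country code (e.g. 0x0086)
    llid += network.to_bytes(2, "big")   # 2-byte network code
    llid += ne_id.to_bytes(4, "big") + bytes(7)  # 11-byte NE code
    return llid

llid = build_llid()
print(len(llid), llid.hex())  # 16 bytes total
```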
CAUTION
After an NE code is specified with a value different from the NE ID (that is, the first four bytes are not the same as the NE ID, or the last seven bytes are not 0s), the NE code remains unchanged regardless of changes to the NE ID.
Values
Value Range
Default Value
[2BYTE]00 00-ff ff
[2BYTE]00 00
Configuration Guidelines
The binary coded decimal (BCD) format is used for the network code. You can set a network code as required within the specified value range.
The default network code is 0x0000, which is the recommended value.
Related Information
According to ITU-T I.610, an LLID can be specified in multiple coding methods, and the first byte of the LLID identifies the coding method.
In the GUI of the network management system, an LLID is specified in coding method 2 (the first byte is 0x01) described in ITU-T I.610, and a window is available for setting the other 15 bytes.
The sequence and meaning of each field are as follows:
2-byte country code (defaults to 0x0086 in BCD) + 2-byte network code (defaults to 0x0000 in BCD) + 11-byte NE code (the first four bytes default to the NE ID, and the last seven bytes default to 0s)
CAUTION
After an NE code is specified with a value different from the NE ID (that is, the first four bytes are not the same as the NE ID, or the last seven bytes are not 0s), the NE code remains unchanged regardless of changes to the NE ID.
Values
Value Range
Default Value
[11BYTE]00 00 00 00 00 00 00 00 00 00 00-ff ff ff ff ff ff ff ff ff ff ff
4-byte NE ID + 7-byte 0s
Configuration Guidelines
To ensure that each NE on the network has a unique LLID, set the values of the 11 bytes as required.
The first four bytes default to the NE ID, and the last seven bytes are 0s. It is recommended that you use the default value.
Related Information
According to ITU-T I.610, an LLID can be specified in multiple coding methods, and the first byte of the LLID identifies the coding method.
In the GUI of the network management system, an LLID is specified in coding method 2 (the first byte is 0x01) described in ITU-T I.610, and a window is available for setting the other 15 bytes.
The sequence and meaning of each field are as follows:
2-byte country code (defaults to 0x0086 in BCD) + 2-byte network code (defaults to 0x0000 in BCD) + 11-byte NE code (the first four bytes default to the NE ID, and the last seven bytes default to 0s)
CAUTION
After an NE code is specified with a value different from the NE ID (that is, the first four bytes are not the same as the NE ID, or the last seven bytes are not 0s), the NE code remains unchanged regardless of changes to the NE ID.
Values
For N1ADQ1/N1ADL4 boards:
Valid Values
Default Value
VCTRUNK1-VCTRUNK16
UNI
Description
VCTRUNK1-4
VCTRUNK5-16
Default Value
VCTRUNK1-VCTRUNK70
UNI
Description
VCTRUNK1-VCTRUNK66
VCTRUNK67-VCTRUNK70
Default Value
VCTRUNK1-VCTRUNK98
UNI
Value
Description
VCTRUNK1-VCTRUNK94
Indicates the VCTRUNKs that can be bound with the VC12-xv or VC4-xv.
Value
Description
VCTRUNK95-VCTRUNK98
Indicates the VCTRUNKs that can be bound with the VC4-xv only.
Configuration Guidelines
Principles of binding internal ports (VCTRUNK):
For N1ADQ1 and N1ADL4 boards:
Port Number | Binding Level | Path Number
VCTRUNK1-4 | VC4-xv | 1-4
VCTRUNK5-16 | VC4-xv | 5-8
VCTRUNK5-16 | VC3-xv | 13-24

For N1IDQ1 and N1IDL4 boards:
Port Number | Binding Level | Path Number
VCTRUNK1-66 | VC4-xv | 1-4
VCTRUNK1-66 | VC12-xv | 1-63
VCTRUNK67-70 | VC4-xv | 5-8

For N1IDQ1A and N1IDL4A boards:
Port Number | Binding Level | Path Number
VCTRUNK1-94 | VC4-xv | 1-4
VCTRUNK1-94 | VC12-xv | 1-189
VCTRUNK95-98 | VC4-xv | 5-8
NOTE
For the IDQ1 or IDL4, if the binding level is VC12-xv, 16 ports at the VC-12 level can be bound. The path
number is allocated from 1 to 63 on the NMS.
For the IDQ1A or IDL4A, if the binding level is VC12-xv, 93 ports at the VC-12 level can be bound. The
path number is allocated from 1 to 189 on the NMS.
Values
For N1ADQ1/N1ADL4 boards:
Valid Value
Default Value
VC3-xv, VC4-xv
VC3-xv
Default Value
l VC12-xv
VC12-xv
l VC4-xv
Default Value
VC12-xv, VC4-xv
VC12-xv
Configuration Guidelines
Principles of binding internal ports (VCTRUNK):
For N1ADQ1 and N1ADL4 boards:
Port Number | Binding Level | Path Number
VCTRUNK1-4 | VC4-xv | 1-4
VCTRUNK5-16 | VC4-xv | 5-8
VCTRUNK5-16 | VC3-xv | 13-24

For N1IDQ1 and N1IDL4 boards:
Port Number | Binding Level | Path Number
VCTRUNK1-66 | VC4-xv | 1-4
VCTRUNK1-66 | VC12-xv | 1-63
VCTRUNK67-70 | VC4-xv | 5-8

For N1IDQ1A and N1IDL4A boards:
Port Number | Binding Level | Path Number
VCTRUNK1-94 | VC4-xv | 1-4
VCTRUNK1-94 | VC12-xv | 1-189
VCTRUNK95-98 | VC4-xv | 5-8
Values
Valid Values
Default Value
Bidirectional
Description
Uplink
Downlink
Bidirectional
Configuration Guidelines
An end-to-end connection is bound with VCTRUNKs in both directions, because the end-to-end ATM connection is bidirectional.
A multicast connection can be bound with a VCTRUNK in one direction, because the multicast ATM connection is unidirectional.
Values
For N1ADQ1 and N1ADL4 boards:
Value Range
Default Value
UNI
Value
Description
Default Value
UNI
Description
Default Value
UNI
Value
Description
Indicates the VCTRUNKs that can be bound with the VC12-xv or VC4-xv.
Configuration Guidelines
Principles of binding internal ports (VCTRUNKs):
For N1ADQ1 and N1ADL4 boards:
Port Number | Binding Level | Path Number
VCTRUNK1-4 | VC4-xv | 1-4
VCTRUNK5-16 | VC4-xv | 5-8
VCTRUNK5-16 | VC3-xv | 13-24

For N1IDQ1 and N1IDL4 boards:
Port Number | Binding Level | Path Number
VCTRUNK1-66 | VC4-xv | 1-4
VCTRUNK1-66 | VC12-xv | 1-63
VCTRUNK67-70 | VC4-xv | 5-8

For N1IDQ1A and N1IDL4A boards:
Port Number | Binding Level | Path Number
VCTRUNK1-94 | VC4-xv | 1-4
VCTRUNK1-94 | VC12-xv | 1-189
VCTRUNK95-98 | VC4-xv | 5-8
Values
Valid Value
Default Value
UNI, NNI
UNI
Description
UNI
NNI
Configuration Guidelines
If the number of VPIs received at a port is large (for example, greater than 255), select NNI to
increase the number of VPI bits. As a result, more VPIs are available.
Values
Value Range
Default Value
Unit
bits
NOTE
Configuration Guidelines
Max. VPI Bits refers to the number of bits in the valid VPI field of the cell header that a port supports. The VPI value and the value of Max. VPI Bits have the following relationship: maximum VPI = 2^Bits - 1.
Values
Value Range
Default Value
Unit
6-16
bits
NOTE
Configuration Guidelines
Max. VCI Bits refers to the number of bits in the valid VCI field of the cell header that a port supports. The VCI value and the value of Max. VCI Bits have the following relationship: maximum VCI = 2^Bits - 1.
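The relationship between the bit-width settings and the largest usable identifier values can be checked with a one-line computation; the function name is an illustrative assumption.

```python
def max_identifier(bits):
    """Largest VPI or VCI value expressible in the given number of bits."""
    return (1 << bits) - 1  # 2**bits - 1

print(max_identifier(8))   # UNI port, Max. VPI Bits = 8  -> VPI up to 255
print(max_identifier(12))  # NNI port, Max. VPI Bits = 12 -> VPI up to 4095
print(max_identifier(16))  # Max. VCI Bits = 16           -> VCI up to 65535
```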
Values
Value Range
Default Value
Unit
0-4096
224
Piece
NOTE
Configuration Guidelines
None.
Values
Value Range
Default Value
Unit
1-8192
3072
Piece
NOTE
Configuration Guidelines
None.
Values
Board Name
Value Range
Default Value
Unit
N1IDQ1A,
N1IDL4A
0-4000
Number
N1ADQ1, N1ADL4,
N1IDQ1, N1IDL4
0-4096
Number
For example, two PVPs have been set up and activated at internal port 1. In this case, the number
of configured VPCs is 2.
Configuration Guidelines
None.
Values
Value Range
Default Value
Unit
0-8192
Number
For example, two PVCs have been set up and activated at internal port 1. In this case, the number
of configured VCCs is 2.
Configuration Guidelines
None.
Values
Value Range
Default Value
Unit
32
Number
NOTE
Configuration Guidelines
Set this parameter based on the number of VCIs required for each VPI at each port. For example,
if 50 PVCs need to be set up at a port and the VPI is different for each PVC, 50 VPIs are required
to differentiate the PVCs. In this case, set the number of VPIs that support VC label switching
to 50.
Values
Valid Values
Default Value
Enabled, Disabled
Enabled
Value
Description
Enabled
Disabled
The flow parameters are invalid, except for the PCR of the CBR service. In this case, the system provides the guaranteed bandwidth, and grooms the unguaranteed services based on service priority.
Configuration Guidelines
The value is not restricted; select it according to the networking environment and the application scenario. Generally, select Enabled for important voice and signaling services whose flow needs to be strictly controlled. Select Disabled for connectionless services for which service burst and grooming are allowed.
Values
Valid Value
Default Value
Enable, Disable
Enable
Value
Description
Enable
Disable
Configuration Guidelines
Select a proper value according to the networking environment and application scenario.
Generally, set this parameter to Enable for services that require strict flow control. Set this
parameter to Disable for services that allow service burst and grooming.
Values
Value
Range
Default
Value
1-2048
Description
This parameter is related to the number of created connections. When a connection is created, a traffic record is added to the traffic table. To set up a connection, you can either select a created traffic record from the ATM traffic table or enter the traffic ID of the created record.
Configuration Guidelines
Select a traffic type required for the connection from the ATM traffic table.
Values
The following table lists the available traffic types for each service type.
Service Type
Traffic Type
Default Value
CBR
l NoClpNoScr
NoClpNoScr
l ClpNoTaggingNoScr
l ClpTaggingNoScr
l ClpTransparentNoScr
l NoClpNoScrCdvt
rt-VBR
l ClpTransparentScr
ClpTransparentScr
l NoClpScrCdvt
l ClpNoTaggingScrCdvt
l ClpTaggingScrCdvt
nrt-VBR
l NoClpScr
NoClpScr
l ClpNoTaggingScr
l ClpTaggingScr
UBR
l NoTrafficDescriptor
NoTrafficDescriptor
l NoClpNoScr
l NoClpTaggingNoScr
l NoClpNoScrCdvt
UBR+
l NoTrafficDescriptorMcr
NoTrafficDescriptorMcr
l NoClpMcr
l NoClpMcrCdvt
Configuration Guidelines
For CBR services:
Traffic Type | CDVT | Processing Cells with CLP of 0 and 1 in Different Ways | Labeling Cells | PCR | MCR | MBS | SCR
NoClpNoScr | Not supported | No | No | Not supported | Not supported | Not supported | Not supported
ClpNoTaggingNoScr | Not supported | Yes | No | Not supported | Not supported | Not supported | Not supported
ClpTaggingNoScr | Not supported | Yes | Yes | Not supported | Not supported | Not supported | Not supported
ClpTransparentNoScr | Supported | No | No | Not supported | Not supported | Not supported | Not supported
NoClpNoScrCdvt | Supported | No | No | Not supported | Not supported | Not supported | Not supported

For rt-VBR services:
Traffic Type | CDVT | Processing Cells with CLP of 0 and 1 in Different Ways | Labeling Cells | PCR | MCR | MBS | SCR
ClpTransparentScr | Supported | No | No | Supported | Not supported | Supported | Supported
NoClpScrCdvt | Supported | No | No | Supported | Not supported | Supported | Supported
ClpNoTaggingScrCdvt | Supported | Yes | No | Supported | Not supported | Supported | Supported
ClpTaggingScrCdvt | Supported | Yes | Yes | Supported | Not supported | Supported | Supported

For nrt-VBR services:
Traffic Type | CDVT | Processing Cells with CLP of 0 and 1 in Different Ways | Labeling Cells | PCR | MCR | MBS | SCR
NoClpScr | Not supported | No | No | Supported | Not supported | Supported | Supported
ClpNoTaggingScr | Not supported | Yes | No | Supported | Not supported | Supported | Supported
ClpTaggingScr | Not supported | Yes | Yes | Supported | Not supported | Supported | Supported

For UBR services:
Traffic Type | CDVT | Processing Cells with CLP of 0 and 1 in Different Ways | Labeling Cells | PCR | MCR | MBS | SCR
NoTrafficDescriptor | Not supported | No | No | Not supported | Not supported | Not supported | Not supported
NoClpNoScr | Not supported | No | No | Supported | Not supported | Not supported | Not supported
NoClpTaggingNoScr | Supported | Yes | Yes | Supported | Not supported | Not supported | Not supported
NoClpNoScrCdvt | Supported | No | No | Supported | Not supported | Not supported | Not supported

For UBR+ services:
Traffic Type | CDVT | Processing Cells with CLP of 0 and 1 in Different Ways | Labeling Cells | PCR | MCR | MBS | SCR
NoTrafficDescriptorMcr | Not supported | No | No | Not supported | Supported | Not supported | Not supported
NoClpMcr | Not supported | No | No | Supported | Supported | Not supported | Not supported
NoClpMcrCdvt | Supported | No | No | Supported | Supported | Not supported | Not supported
Note: Cell labeling means setting the cell loss priority (CLP) of cells that do not meet the traffic parameters to 1. In the case of network congestion, cells with a CLP of 1 are discarded first.
Select a service type based on the network type and service configurations.
NOTE
Values
Valid Value
Default Value
NOTE
Description
CBR
Value
Description
rt-VBR
Indicates the real-time variable bit rate. The rt-VBR service is sensitive to the delay and delay variation of the data flow. Voice and interactive video services are the typical applications, which are similar to the CBR service. Some burst events are allowed for the rt-VBR service, and the data rate at the source end may differ at different time segments. Moreover, static bandwidth is not allocated for the rt-VBR service on the network side (that is, by the service provider); instead, the rt-VBR service runs in statistical multiplexing mode. When applying for the rt-VBR service from the network side, you must specify parameters such as the peak cell rate (PCR), sustainable cell rate (SCR), maximum burst size (MBS), and cell delay variation tolerance (CDVT).
nrt-VBR
Indicates the non-real-time variable bit rate. Compared with the rt-VBR service, the nrt-VBR service has lower requirements for the real-time feature and has a lower priority than the rt-VBR service when service data is processed on the network side. The other features, such as burstiness, statistical multiplexing, and service parameters, are almost the same as those of the rt-VBR service.
UBR
Indicates the unspecified bit rate.
UBR+
Configuration Guidelines
Each service type has its own application scenario. Moreover, a service may be affected by the network environment and the interconnected equipment. The following table lists the application scenarios for each service type.
ATM Service
Service Type | Service Category | Application
CBR | Category A | Voice service: conference call, audio transmission (broadcast), audio library, voice mails. Video: video conference call, video transmission, video on demand.
rt-VBR | Category B | Voice service: voice mails, conference calls based on voice packets. Video.
nrt-VBR | Category C | Data: air interface reservation, banking service, frame relay network interconnection.
UBR | Category C or D | Data: e-mail service, file transmission, database browsing, remote terminal access.
UBR+ | Category C or D | Data: e-mail service, file transmission, database browsing, remote terminal access.
Values
For N1IDQ1A and N1IDL4A boards:
Value Range
Default Value
Unit
5-1412828
90
cells/s
Default Value
Unit
90-1412828
90
cells/s
Configuration Guidelines
Generally, set PCR according to the service configurations on the interconnected equipment. Different services are used in different scenarios. The details are as follows:
l For CBR services, PCR equals the reserved bandwidth. If the UPC/NPC function is enabled, the traffic is controlled according to the PCR value; that is, excessive packets are discarded.
l For VBR services, if the UPC/NPC function is enabled, excessive packets are discarded when the service rate exceeds the PCR. If the UPC/NPC function is disabled, this parameter is meaningless; that is, the traffic rate can exceed the specified PCR. In this case, PCR is used only for reserved bandwidth computation.
l For UBR services, the bandwidth depends on the available physical bandwidth of a port; PCR determines the maximum rate only.
Values
For N1IDQ1A and N1IDL4A boards:
Value Range
Default Value
Unit
5-1412828
90
cells/s
Value Range
Default Value
Unit
90-1412828
90
cells/s
Configuration Guidelines
Generally, set SCR based on service configurations of the interconnected equipment. Different
services are deployed in different scenarios. Details are as follows:
l
A.15.20 MBS
Description
The MBS parameter specifies the maximum number of cells that can be transmitted continuously over the connection in a burst at the peak cell rate. This parameter is set to increase the capability of a system to respond to transient cell bursts, and to enhance the anti-jitter performance of a service.
Values
Value Range
Default Value
Unit
Cells
Configuration Guidelines
Generally, set this parameter to a large value to prevent cells from being discarded in case of a
transient burst event. The recommended value is 100000.
Related Information
If the UPC/NPC function is enabled, this parameter allows transient bursts of a service. The PCR is similar to a container of constant size; the MBS and cell delay variation tolerance (CDVT) help to increase the container size. When the transient traffic entering the container increases, the increased size of the container takes effect. If the transient traffic keeps increasing, an overflow occurs.
l If this parameter is set for the opposite equipment, set it to the same value for the local equipment.
l If this parameter is not set for the opposite equipment, set it to the integer part of 1 + L/(1/SCR - 1/PCR) for the local equipment. Letter L indicates the CDVT, representing the maximum tolerance for a cell to arrive in advance when MBS is set to an expected value.
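The guideline above (MBS as the integer part of 1 + L/(1/SCR - 1/PCR)) can be computed directly; the function name and example figures are illustrative.

```python
def mbs_from_cdvt(cdvt, scr, pcr):
    """Integer part of 1 + L/(1/SCR - 1/PCR), per the guideline above.

    cdvt -- tolerance L in seconds
    scr, pcr -- sustainable and peak cell rates in cells/s (pcr > scr)
    """
    return int(1 + cdvt / (1.0 / scr - 1.0 / pcr))

# Example: SCR = 100 cells/s, PCR = 200 cells/s, CDVT = 0.02 s
# 1/SCR - 1/PCR = 0.005 s, so MBS = int(1 + 0.02/0.005) = 5
print(mbs_from_cdvt(0.02, 100, 200))  # 5
```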
A.15.21 CDVT
Description
The CDVT parameter specifies the upper threshold of the cell sending interval. This parameter
is set to increase the capability of a system to respond to transient cell burst events, and to enhance
the anti-jitter performance of a service.
l If this parameter is set to a small value, certain cells may be discarded in the case of a service burst event.
l If this parameter is set to a large value, the system is not affected. Generally, set it to the maximum value.
Values
Value Range
Default Value
Unit
0.7
ms
Configuration Guidelines
In most cases, set it to the maximum value.
Related Information
Fix the PCR value of the level-1 leaky bucket, and estimate the CDVT value. After passing the level-1 leaky bucket, the data flow is bursty, and the average rate is the PCR. Generally, the data traffic is transmitted at intervals of PCR, 0, PCR, 0, and so on. The CDVT value needs to be set to meet the following condition:
CDVT < 424 x (MBS - 1) x (1/SCR - 1/PCR)
(SCR and PCR are expressed in bit/s; 424 bits/cell = 53 bytes/cell x 8 bits/byte)
Multiple ATM cells are encapsulated into one AAL5 frame to provide functions (for example, signaling and routing) of the upper-layer services. If the AAL5 service bandwidth exceeds the port bandwidth or the specified bandwidth, excessive packets are discarded on a per-AAL5-frame basis. One AAL5 data frame may contain multiple ATM cells.
Values
Valid Values
Default Value
Yes, No
No
Configuration Guidelines
Select Yes or No based on the encapsulation type at the ATM adaptation layer. Generally, select No.
l Enable this flag only when the service is carried at the AAL5 layer.
l If any service at another adaptation layer is present, do not enable this flag.
Values
Value Range
Default Value
1-2048
Configuration Guidelines
This parameter depends on the created flows. When a flow is created, it is added to the flow table. To set up a connection, you can select a created flow from the flow table or enter the ID of the created flow.
Values
For IDQ1A/IDL4A boards:
Value Range
Default Value
1-32768
Default Value
1-8192
Configuration Guidelines
Set this parameter to a value within the value range.
Values
Valid Value
Default Value
PVP, PVC
PVC
Description
PVP
Each PVP transmits, over the connection, all services that have the same VPI, regardless of their VCIs.
PVC
Configuration Guidelines
To converge services of the same VPI at a port, select PVP.
To transmit services marked with unique VPI and VCI only, select PVC.
As shown in Figure A-21, PVCs and PVPs are both virtual transmission paths; however, a PVC is contained in a PVP. A PVC is a channel, whereas a PVP is a path. The PVCs in a PVP are used to transmit services.
Figure A-21 Relation between PVC and PVP
Values
Board Name
Valid Value
Default Value
N1IDQ1A, N1IDL4A
p2p
p2p
p2p
Description
p2p
p2mpRoot
Value
Description
p2mpLeaf
Provides leaf connections for p2mp multicast services. The leaf nodes
are multicast over the root connection, for example, the blue lines shown
in the figure.
Configuration Guidelines
The p2p connection is used for point-to-point communication, for example, phone calls and Internet access services. The p2mpRoot and p2mpLeaf connections are used for multicast communication, for example, IPTV and broadcast services.
A unicast service is bidirectional, whereas a multicast service is unidirectional. The application scenarios are shown in Figure A-22 and Figure A-23, respectively.
Figure A-22 Multicast service
Related Information
To set up a multicast connection, set up a root connection for multicast services, and then set up
multiple leaf connections that have the same source but different sinks. These leaf connections
are multicast to different ports (spatial multicast) or the same port (logical multicast). A multicast
service is a unidirectional service.
[Figure: VPC and VCC switching example. ATM Node1 performs VCC switching: incoming cells with VPI=69, VCI=34/48 are switched to outgoing VPI=77, VCI=22/26 and sent to ATM Node2, which in turn switches them (VPI in 77 to VPI out 31). ATM Node3 performs VPC switching at the VPL level: VPI in 53 is switched to VPI out 76, and the VCIs (19, 26, 39) are carried through unchanged.]
The sink VPI/VCI of ATM Node1 must be the same as the source VPI/VCI of ATM Node2 (VPI = 77, VCI = 22 or 26). Otherwise, ATM Node2 may fail to identify the data received from ATM Node1 (for example, if the VPI in the cell header is not 77, or the VCI in the cell header is not 22 or 26). ATM Node2 can identify only the cells whose VPI is 77 and whose VCI is 22 or 26. In other words, a service between adjacent nodes is transmitted on the basis of the same VPI and VCI.
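The per-node label switching described above can be modeled as a lookup table keyed by the incoming (VPI, VCI) pair. The table contents follow the ATM Node1 example above; the code itself is an illustrative sketch, not the equipment's implementation.

```python
# Illustrative VCC label-switching table for ATM Node1, matching the
# example above: incoming VPI=69 cells are relabeled to VPI=77, with
# VCI 34 -> 22 and VCI 48 -> 26.
node1_table = {
    (69, 34): (77, 22),
    (69, 48): (77, 26),
}

def switch(table, vpi, vci):
    """Relabel a cell's (VPI, VCI); unknown labels cannot be identified
    by the node, so None is returned (the cell is dropped)."""
    return table.get((vpi, vci))

print(switch(node1_table, 69, 34))  # (77, 22)
print(switch(node1_table, 53, 19))  # None: Node1 cannot identify this cell
```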
Values
Parameter
Value
Value
Range
Default
Value
VPI
0-4095
Description
Switches VPI labels between nodes. Note that the value range 0-4095 is subject to the maximum configuration. By default, set this parameter to a value ranging from 0 to 255. You can adjust the value range by setting Port Type and Max. VPI Bits. Port resources are limited: if this parameter is set to a large value range for one port, set it to a small value range for other ports. For the configuration method, see Relationship with Other Parameters in this topic.
Parameter
Value
Value
Range
Default
Value
VCI
32-65535
Description
Switches VCI labels between nodes. Note that the value range 32-65535 is subject to the maximum configuration. By default, set this parameter to a value ranging from 32 to 127 (Max. VCI Bits is 7 bits). You can adjust the value range by setting Max. VCI Bits. Port resources are limited: if this parameter is set to a large value range for one port, set it to a small value range for other ports. For the configuration method, see Relationship with Other Parameters in this topic.
NOTE
For the N1IDQ1A or N1IDL4A board, the value range of the VPI parameter is not determined by the Max. VPI Bits setting. Instead, the actual value range of the VPI parameter is configured directly as a specific range (for example, from 1 to 1000). The VCI parameter ranges from 32 to 65535, and this value range cannot be modified.
Configuration Guidelines
This parameter only identifies connections, so it can be set as required. Note that the specified VPI must be consistent with the VPI of the downstream or upstream service on the node to ensure that the labels are switched correctly in the case of multipoint transmission. An example is shown in Impact on the System in this topic.
l If Port Type is set to UNI, Max. VPI Bits of the port can be set to 8 only. In this case, set VPI to a value ranging from 0 to 255.
l If Port Type is set to NNI, Max. VPI Bits of the port can be set to 12. In this case, set VPI to a value ranging from 0 to 4095.
The VCI value range depends on Max. VCI Bits of the port.
l If the value of Max. VCI Bits is increased, a larger VCI value range becomes available.
l If Max. VCI Bits is set to 16, set VCI to a value ranging from 32 to 65535.
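The way the bit-width settings determine the label ranges can be sketched as follows; this is an illustrative calculation only, and the function name is not part of the product:

```python
def label_range(max_bits: int, reserved_low: int = 0) -> range:
    """Value range of an ATM label field that is max_bits wide,
    with the lowest reserved_low values unavailable (VCIs 0-31
    are reserved, so usable VCIs start at 32)."""
    return range(reserved_low, 2 ** max_bits)

# VPI: a UNI port uses 8 bits, an NNI port may use up to 12 bits.
assert label_range(8) == range(0, 256)        # VPI 0-255
assert label_range(12) == range(0, 4096)      # VPI 0-4095

# VCI: 7 bits by default, up to 16 bits.
assert label_range(7, 32) == range(32, 128)     # VCI 32-127
assert label_range(16, 32) == range(32, 65536)  # VCI 32-65535
```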
Values
Valid Value: Up, Down
Default Value: Up
Configuration Guidelines
There are no specific principles for setting this parameter because it is used for querying.
Values

Valid Value: OK, CCLOC, AIS, LCD
Default Value: OK
Configuration Guidelines
There are no specific principles for setting this parameter because this parameter is used for
querying.
Values

Default Value: Idle

Idle: Indicates that the working channel and the protection channel are normal.
Protection Route Signal Fail: Indicates that the protection channel fails but switching does not occur.
Working Route SF
Force to Standby: Indicates that services are manually switched from the working channel to the protection channel when both channels are normal.
1152
A List of Parameters
Non-Revertive
Wait-To-Restore
Manual to Active: Indicates that services are switched from the protection channel to the working channel.
Manual to Standby: Indicates that services are switched from the working channel to the protection channel.
Freeze
Lockout
Configuration Guidelines
This parameter is used for querying.
Values

For IDQ1A/IDL4A boards:
Valid Value: 1:1
Default Value: 1:1

For IDQ1/IDL4 boards:
Valid Value: 1+1, 1:1
Default Value: 1+1
Huawei Proprietary and Confidential
Copyright Huawei Technologies Co., Ltd.
Configuration Guidelines
Select a value according to the protection type specified by users.
Related Information
1+1 protection: Uses the dual-fed and selective receiving mechanism.
Figure A-25 shows the normal status.
Figure A-25 Normal status
(Figure labels: working entity, protection entity, service data, working service data, extra service data.)
Values

Valid Value: Sink, Source, Source + Sink
Default Value: Sink

Configuration Guidelines
l If two P2P ATM connections have the same source but different sinks, select Sink.
l If two P2P ATM connections have the same sink but different sources, select Source.
l If two P2P ATM connections have different sources and sinks, select Source + Sink.
Values
Valid Value: Stop, Normal, Switch
Configuration Guidelines
None.
Values
Valid Value: Single-Ended, Dual-Ended
Default Value: Single-Ended
Configuration Guidelines
Select a proper value according to the protection type specified by users.
Related Information
The 1+1 protection is used as an example to describe differences between single-ended
protection and dual-ended protection.
In single-ended protection, if the working link fails in one direction, services are switched from the working link to the protection link only in that direction. Switching does not occur in the other direction.
Figure A-29 shows the normal status.
Figure A-29 Normal status
(Figure labels: working entity, protection entity, service data.)
In dual-ended protection, a switching event in one direction automatically triggers switching in the other direction. In this case, a bidirectional service is completely switched to the protection link.
Figure A-31 shows the normal status.
Figure A-31 Normal status
(Figure labels: working entity, protection entity, working service data, extra service data.)
Values
Valid Value: Revertive, Non-Revertive
Default Value: Revertive
Configuration Guidelines
Select a proper value, depending on whether the ATM service needs to be switched to the original
working link.
Values
Default Value: 100
Unit: ms
Configuration Guidelines
Select a proper value according to requirements.
Values
Default Value: 12
Unit: min
Configuration Guidelines
Set this parameter properly based on the actual network conditions.
Values
Value Range: Clear, Freeze, Lockout of Protection, Force to Standby, Manual to Active, Manual to Standby
Default Value: Clear
Configuration Guidelines
Select the value according to the switching operation to be performed.
Values
IDQ1A/IDL4A:
Value Range: 1-93

IDQ1/IDL4:
Value Range: 1-32
Configuration Guidelines
When you create IMA groups, the IMA group number is automatically allocated by the U2000. The IDQ1 or IDL4 supports at most 16 IMA groups, and the IDQ1A or IDL4A supports at most 93 IMA groups.
Values
Valid Values: 1.0, 1.1
Default Value: 1.1
Configuration Guidelines
The IMA protocol versions at the two ends of the interconnected equipment must be consistent.
Values
Default Value: 128
Configuration Guidelines
IMA Transmit Frame Length should be consistent with that of the interconnected board.
Mode and Symmetrical Operation, Symmetrical Mode and Asymmetrical Operation, and Asymmetrical Mode and Asymmetrical Operation.
Values
Valid Values
Default Value
Description
Configuration Guidelines
Select the value according to the following application scenarios.
Figure A-33 shows the asymmetrical mode and symmetrical operation.
If both directions of an E1 link in the IMA group are configured as transmit, or both as receive, the IMA group fails to negotiate.
Values
Value Range: 0-32
Configuration Guidelines
Set this parameter according to the minimum number of active links supported by the opposite
equipment.
Alternatively, set the value according to the actual bandwidth requirement of the services. For example, suppose an IMA group has 10 active links with a total bandwidth of 20 Mbit/s (2 Mbit/s per link), and the interconnected equipment must be provided with at least 15 Mbit/s. In this case, set Minimum Number of Active Links to 8, because 8 links provide 16 Mbit/s. If the number of active links falls below 8, the available bandwidth drops below 15 Mbit/s and the services of the IMA group are interrupted.
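The arithmetic in the example above can be sketched as follows; an illustrative calculation only, and the function name is not part of the product:

```python
import math

def min_active_links(required_mbps: float, link_mbps: float = 2.0) -> int:
    """Smallest number of active E1 links whose combined bandwidth
    still meets the service requirement (an E1 link carries about
    2 Mbit/s in this example)."""
    return math.ceil(required_mbps / link_mbps)

# Worked example from the text: 15 Mbit/s required, 2 Mbit/s per link.
assert min_active_links(15) == 8   # 8 links provide 16 Mbit/s
```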
Values
Value Range: Not Configured, Start-Up, Start-Up-Ack, Config-Aborted, Insufficient-Links, Blocked, Operational
Configuration Guidelines
None.
Values
Value Range: ITUT Mode, European Mode
Default Value: ITUT Mode
Configuration Guidelines
The IMA 1.0 management mode cannot interwork with the IMA 1.1 management mode.
Ensure that the configurations of the two ends of an IMA group are the same when you select
the IMA 1.0 management mode. In addition, IMA Transmit Frame Length must be set to 128;
IMA Group Configuration Mode must be set to Symmetrical Mode And Symmetrical
Operation.
Values
Valid Values: Enabled, Disabled
Default Value: Enabled
Configuration Guidelines
Select the value according to the equipment at the opposite end. Generally, select Enabled.
Values
Valid Values: E1 Dual Frame, E1 CRC-4 Multiframe
Default Value: E1 CRC-4 Multiframe
Value
Description
E1 Dual Frame
E1 CRC-4 Multiframe
Configuration Guidelines
Currently, E1 dual frames are rarely used; E1 CRC-4 multiframes are used in most cases. For this reason, select E1 CRC-4 Multiframe.
Values
Valid Values: Forward, Backward

Forward: Indicates the direction from the source port to the sink port.
Backward: Indicates the direction from the sink port to the source port.
Configuration Guidelines
None.
Values

Valid Values: Segment point, Endpoint, Segment and Endpoint
Configuration Guidelines
According to different OAM maintenance segments, as shown in Figure A-37, set the segment
and end point attribute properly.
Figure A-37 Schematic diagram of the maintenance segment
(Figure labels: user1, user2, segment, end to end, seg, end, seg+end, inner.)
l To maintain the entire ATM connection, select Endpoint for the two ends of the connection.
l To maintain one segment of the entire ATM connection, select Segment point for the two ends of the segment to be maintained.
l If the endpoint overlaps the segment point, select Segment and Endpoint.
CAUTION
l For a protection link that is added into the 1+1 source or 1+1 sink protection group, do not
select Segment and Endpoint. Moreover, the CC function cannot be enabled.
l For a connection that is added into the protection group, do not select Segment.
A.15.50 LLID
Description
The LLID parameter specifies the loopback location. It contains 15 bytes.
Values

Field                               Valid Values    Default Value
Country Code (Hexadecimal Code)     2 bytes         0000
Network Code (Hexadecimal Code)     2 bytes         0000
NE Code (Hexadecimal Code)          11 bytes
Configuration Guidelines
Generally, select the default value.
Related Information
Loopback Test:
1.
2.
For the segment LB test, the loopback point may be the intermediate point or the segment
point. Regardless of the destination LLID, the seg-LB cells are captured at the segment
point. For this reason, the seg_LB test cannot be carried out across different segment points.
3. For the end-to-end LB test, the loopback point must be the endpoint rather than an intermediate point. Likewise, regardless of the destination LLID, the e-t-e_LB cells are captured at the endpoint. For this reason, the e-t-e_LB test cannot be carried out across different segment points either.
LLID:
Indicates the coding mode of the LLID. In practice, the APC LLID can be set to any value. For the Qx or NE-board interface, the LLID can contain 16 bytes. For the U2000 server and client, the window is designed according to the second coding mode (0x01) specified in ITU-T I.610. For this reason, you can enter 15 bytes, which consist of a 2-byte country code (the default value is 0000 in hexadecimal), a 2-byte network code (the default value is 0000 in BCD code), and an 11-byte NE code (by default, the first four bytes indicate the NE ID; enter 0 for the last seven bytes). If you set the LLID and the NE code is different from the NE ID, the LLID does not change when the NE ID is changed.
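The 15-byte layout described above can be assembled as follows; the function name and the sample NE ID are illustrative only:

```python
def default_llid(ne_id: int) -> bytes:
    """Build the default 15-byte LLID: a 2-byte country code (0000),
    a 2-byte network code (0000), and an 11-byte NE code whose first
    4 bytes carry the NE ID and whose last 7 bytes are zero."""
    country = b"\x00\x00"
    network = b"\x00\x00"
    ne_code = ne_id.to_bytes(4, "big") + b"\x00" * 7
    return country + network + ne_code

llid = default_llid(0x00090001)  # hypothetical NE ID
assert len(llid) == 15
assert llid[4:8] == b"\x00\x09\x00\x01"  # NE ID sits in bytes 4-7
```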
Values
Value Range: 4528-1412828 cell/s
Configuration Guidelines
The maximum ingress bandwidth of each external optical port on the N1IDQ1 and N1ADQ1
boards is 353207 cell/s. The maximum ingress bandwidth of each external optical port on the
N1IDL4 and N1ADL4 boards is 1412828 (353207x4) cell/s.
For internal ATM ports, the maximum ingress bandwidth depends on the type and number of the bound services. If VCTRUNK1 is bound with two VC-12s, the maximum ingress bandwidth is 9056 (4528x2) cell/s. If VCTRUNK1 is bound with four VC-4s, the maximum ingress bandwidth is 1412828 (353207x4) cell/s.
Plan properly before configuring or using a board to ensure that the ingress bandwidth of each
port does not exceed the permitted value range.
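The per-port limits above reduce to a simple sum over the bound paths; the cell rates come from the text, while the function name and dictionary layout are assumptions:

```python
# Cell-rate contribution of each bound path type, as stated in the text.
CELL_RATE = {"VC-12": 4528, "VC-4": 353207}  # cell/s

def max_port_bandwidth(bound_paths: dict) -> int:
    """Maximum cell rate of an internal ATM port, given how many
    paths of each type are bound to it."""
    return sum(CELL_RATE[path] * count for path, count in bound_paths.items())

assert max_port_bandwidth({"VC-12": 2}) == 9056     # two VC-12s
assert max_port_bandwidth({"VC-4": 4}) == 1412828   # four VC-4s
```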
Values
Value Range: 4528-1412828 cell/s
Configuration Guidelines
The maximum egress bandwidth of each external optical port on the N1IDQ1 and N1ADQ1
boards is 353207 cell/s. The maximum egress bandwidth of each external optical port on the
N1IDL4 and N1ADL4 boards is 1412828 (353207x4) cell/s.
For internal ATM ports, the maximum egress bandwidth depends on the type and number of the bound services. If VCTRUNK1 is bound with two VC-12s, the maximum egress bandwidth is 9056 (4528x2) cell/s. If VCTRUNK1 is bound with four VC-4s, the maximum egress bandwidth is 1412828 (353207x4) cell/s.
Plan properly before configuring or using a board to ensure that the egress bandwidth of each
port does not exceed the permitted value range.
Values
Value Range: 1-255
Configuration Guidelines
In the same RPR, each node must have a unique ID. IDs of nodes in different RPRs can be
allocated separately.
When planning a network, uniformly plan IDs for nodes in the RPR. For a new project, assign node IDs in ascending order along the direction of ring 0.
After you configure services, do not modify node IDs. Otherwise, the following results may occur:
l Some packets in a service that is forwarded at the local node are lost for up to 50 ms.
l The service whose destination node is the local node may be unavailable. When this occurs, you need to reconfigure the service according to the new ID of the local node.
Values
Valid Value: Enabled, Disabled
Default Value: Disabled
Configuration Guidelines
l To configure an RPR network, enable the RPR protocol for all the nodes in the RPR.
l For a node that adds or drops a service, you must enable the RPR protocol.
l For a node that does not add or drop a service at the moment, generally, you also need to enable the RPR protocol.
If a node with the RPR protocol disabled needs to add or drop services, enabling the RPR protocol at that moment may interrupt services for up to 50 ms. If a node does not need to add or drop services, disable the RPR protocol to shorten the time for the nodes to learn the topology and thereby minimize the service interruption time caused by a change in the RPR network topology.
If some services are available in the RPR network, enable or disable the RPR protocol on the
node only when you make sure that this operation does not affect any service in the network.
Values

Board Name    Default Value    Unit
N1EMR0        1000             ms
              100              ms
Configuration Guidelines
The slow timer has the same default value on each node. Generally, use the default value. If customers propose specific requirements, set the slow timer to a proper value depending on the board software version. However, the value must not exceed the range allowed in practice.
Values
Unit: Second
Configuration Guidelines
Generally, use the default value. If any requirement is proposed by customers, you can set ATD
Timer Value to a proper value based on the RPR network status. The value of ATD Timer
Value, however, cannot exceed the range allowed in practice.
available to protect services in the RPR network. For details, see "Related Information" in this
topic.
Values

Valid Value: Steering, Wrapping, Wrap and Steering
Default Value: Steering

Steering: Changes the service route to protect the RPR network. In this case, less bandwidth is used, but more time is taken to respond, and more packets are discarded in case of a switching event.
Wrapping: Switches the loop to protect services in the RPR network. In this case, less time is taken to respond and fewer packets are discarded in case of a switching event, but some bandwidth is wasted.
Wrap and Steering: Switches the loop and changes the service route to protect services in the RPR network. This mode integrates advantages such as the short response time of Wrapping mode and the optimized transmission path of Steering mode.
Configuration Guidelines
Configure a proper protection mode for every node in the RPR network based on the service
characteristics and actual requirements. The protection modes for all the nodes in the same RPR
network must be compatible with each other. If not, an alarm is generated and services are not
protected in the way as expected by customers. Use the default setting, unless otherwise
specified.
You can set this parameter based on the compatibility of the protection modes. The Wrapping mode is compatible with the Wrap and Steering mode; however, neither the Wrapping mode nor the Wrap and Steering mode is compatible with the Steering mode. For example, you can set Protection Mode to Steering or to Wrapping for all the nodes in the RPR network, or set Protection Mode to Wrapping for some nodes and to Wrap and Steering for the other nodes. You cannot, however, set Protection Mode to Steering for some nodes and to Wrapping or Wrap and Steering for the other nodes.
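The compatibility rule above can be expressed as a small check; the mode names come from the text, and the function itself is only a sketch:

```python
# Wrapping interoperates with Wrap and Steering; Steering only
# interoperates with itself.
WRAP_FAMILY = {"Wrapping", "Wrap and Steering"}

def modes_compatible(node_modes) -> bool:
    """True if every node's protection mode can coexist on one ring."""
    modes = set(node_modes)
    return modes <= {"Steering"} or modes <= WRAP_FAMILY

assert modes_compatible(["Steering", "Steering"])
assert modes_compatible(["Wrapping", "Wrap and Steering"])
assert not modes_compatible(["Steering", "Wrapping"])
```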
Related Information
Steering: Changes the service route to protect services in the RPR network.
This mode is the default mode for all the nodes. If a part of the RPR is faulty, information that contains the faulty point and fault type is transmitted to each node, and the topology is updated accordingly. The source node then transmits the data directly to the destination node based on the new topology. The data that has already been transmitted toward the faulty point is discarded at that point. In Steering mode, bandwidth usage is improved because data is transmitted over an optimal route, but the switching time is long: a new route is determined only after the topology becomes stable.
Figure A-38 and Figure A-39 respectively show the network status before a fiber cut and after the Steering mode is adopted for protection.
Figure A-38 Network before a fiber cut
Node 2 transmits packets to node 6. Normally, the service flow is in the direction of s2 -> s3 -> s4 -> s5 -> s6. If the fiber between nodes 3 and 4 is cut, the topology is updated to optimize the route. In this case, the service flow is in the direction of s2 -> s1 -> s7 -> s6.
Wrapping: Wraps the ring to protect services in the RPR network.
This mode is optional. If a point on the RPR ring is faulty, the nodes adjacent to the faulty point automatically loop back to connect ring 0 with ring 1. In Wrapping mode, the switching time is short, which minimizes the frame loss resulting from the fault. The bandwidth usage, however, is low.
Figure A-40 and Figure A-41 respectively show the network status before a fiber cut and after the Wrapping mode is adopted for protection.
Node 2 transmits packets to node 6. Normally, the service flow is in the direction of s2 -> s3 -> s4 -> s5 -> s6. If the fiber between nodes 3 and 4 is cut, the ring wraps at nodes 3 and 4. In this case, the service flow is in the direction of s2 -> s3 -> s2 -> s1 -> s7 -> s6 -> s5 -> s4 -> s5 -> s6.
Wrap and Steering: Wraps and then steers the route for protection.
This mode integrates advantages such as the short response time of the Wrapping mode and the optimal route of the Steering mode. In Wrap and Steering mode, the route is first wrapped to avoid losing more packets, and then steered for protection after the new topology becomes stable.
l After the time specified in Hold-off Time expires, switching occurs in the RPR network if the SF or SD condition persists.
l Switching does not occur in the RPR network if the SF or SD condition is cleared.
Values
Unit: ms
Configuration Guidelines
l If no SDH protection scheme (for example, MSP or SNCP) is configured, use the default value to avoid losing more packets within the hold-off time in the RPR network.
l If an SDH protection scheme (for example, MSP or SNCP) is configured, make sure that the hold-off time is longer than the total SDH protection switching time. For example, if the hold-off time is set to 50 ms, the SDH protection schemes take priority; if the SDH protection schemes fail, RPR switching is implemented for protection.
l If the service in the RPR network is not switched, or if forced switching (FS) or manual switching (MS) occurs, modifying this setting does not affect system operation.
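The interaction between the hold-off timer and SDH protection can be sketched as follows; a minimal illustrative model, assuming SDH protection clears the fault within its switching time:

```python
def rpr_switches(holdoff_ms: int, sdh_switch_ms: int,
                 condition_persists: bool) -> bool:
    """Whether RPR protection fires: only if the SF/SD condition still
    persists once the hold-off timer expires, i.e. SDH protection
    (expected to finish within the hold-off time) did not clear it."""
    if sdh_switch_ms <= holdoff_ms:
        return False              # SDH protection restored the path first
    return condition_persists     # otherwise RPR switches if SF/SD persists

assert not rpr_switches(50, 30, True)   # SDH finished within 50 ms
assert rpr_switches(50, 200, True)      # SDH too slow, RPR takes over
assert not rpr_switches(50, 200, False) # condition cleared, no switching
```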
Values
Valid Value: Enabled, Disabled
Default Value: Enabled
Enabled: If the working path recovers after service switching, the node changes to the normal state and the service is switched back to the working path.
Disabled: If the working path recovers after service switching, the node does not change its state. Instead, it remains in the WTR state and the service is not switched back to the working path.
Configuration Guidelines
The node changes to the WTR state only when it recovers from the SD or SF condition. For this reason, this parameter takes effect only when the node recovers from the SD or SF condition, not when it recovers from the FS or MS condition.
The default value is Enabled. Use the default value unless otherwise specified. In this case, although some packets are lost transiently when the link recovers, less bandwidth is used and service forwarding efficiency is improved.
Values
Default Value: 10
Unit: Second
Configuration Guidelines
Generally, use the default value. If any requirements are proposed by customers, you can set
Protection Wait-to-restore to a proper value according to the requirements and RPR network
status. The value of Protection Wait-to-restore cannot exceed the range allowed in practice.
Values
Value Range: 1-255
Configuration Guidelines
Use the default value unless otherwise specified; if the weight is the same for each node, the service bandwidth in the RPR network is allocated consistently. Otherwise, set the weight value based on the required bandwidth. The weight values can be set separately for ring 0 and ring 1.
Application             Sub-Priority   Bandwidth Guaranteed or Not   Jitter   Bandwidth Type   Bandwidth Sub-Type   Adopts the Fairness Algorithm or Not
Real-time service       A0             Yes                           Low      Pre-allocated    Reserved             No
                        A1             Yes                           Low      Pre-allocated    Reclaimable          No
Near-real-time service  B-CIR          Yes                           Middle   Pre-allocated    Reclaimable          No
                        B-EIR          No                            Wider    Used randomly    Reclaimable          Yes
Best-effort delivery    C              No                            Wider    Used randomly                         Yes
This bandwidth is used by RPR nodes whose service priority is A, and it limits the traffic of priority-A services on the ring. If the priority-A traffic exceeds the A0 bandwidth, the excess is transmitted within the A1 bandwidth. If the traffic also exceeds the A1 bandwidth, the excess is discarded.
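The overflow behavior above can be modelled as follows; a minimal sketch with hypothetical function and parameter names:

```python
def police_class_a(traffic_mbps: float, a0_mbps: float, a1_mbps: float):
    """Split priority-A traffic into the share carried in the A0
    bandwidth, the excess carried in the A1 bandwidth, and the
    remainder that is discarded."""
    in_a0 = min(traffic_mbps, a0_mbps)
    in_a1 = min(max(traffic_mbps - a0_mbps, 0.0), a1_mbps)
    dropped = max(traffic_mbps - a0_mbps - a1_mbps, 0.0)
    return in_a0, in_a1, dropped

assert police_class_a(10, 4, 4) == (4, 4, 2)  # 2 Mbit/s discarded
assert police_class_a(3, 4, 4) == (3, 0, 0)   # fits entirely in A0
```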
Values
The parameter values for A0 and A1 bandwidth are the same and are as follows:
Unit: Mbps
Configuration Guidelines
The A0 bandwidth can be set within the bandwidth range in the RPR network, but cannot exceed
the bandwidth of the services whose priority is A. The proportion of the total A0 bandwidth
cannot be extremely large in the RPR network. Otherwise, the bandwidth utilization is decreased.
If no priority-A service is available in the RPR network, set the priority-A bandwidth to 0. If priority-A services are available, generally set the A0 bandwidth to 0 and carry all priority-A services within the A1 bandwidth.
Values
Unit: Mbps
Configuration Guidelines
You can plan the B_CIR bandwidth for each node in the RPR network based on the transmitted
near-real-time service traffic. If no service of priority B is available to the nodes in the RPR
network, generally, set the B_CIR bandwidth to 0. The B-CIR bandwidth should not exceed the
total bandwidth in the RPR network.
(Figure labels: RPR Network, 0 Ring, 1 Ring, nodes S0, S2, S3, S4, S5, OptiX NE.)
Values
Value Range: 0 Ring, 1 Ring
Configuration Guidelines
None.
If the reachability of 0 Ring (or 1 Ring) is Reachable, no fault occurs on the link between
this node and another node on 0 Ring (or 1 Ring). Services of this node can be directly
forwarded to another node on 0 Ring (or 1 Ring).
If the reachability of 0 Ring (or 1 Ring) is Unreachable, a fault occurs on the link between
this node and another node on 0 Ring (or 1 Ring). Services between this node and another
node are unavailable on 0 Ring (or 1 Ring).
Values
Value Range: Reachable, Unreachable
Default Value: Unreachable

Reachable: Indicates that the link between this node and another node on 0 Ring (or 1 Ring) is available.
Unreachable: Indicates that the link between this node and another node on 0 Ring (or 1 Ring) is unavailable.
Configuration Guidelines
None.
Values
Value Range: 0-255

When a node is unreachable, the value 255 is displayed, indicating that the hop count is invalid.
Configuration Guidelines
None.
Values
Value Range: 1-255
Configuration Guidelines
None.
(Figure labels: RPR Network, 1 Ring, nodes S0, S2, S3, S4, S5 with East and West directions, OptiX NE.)
Every node on the RPR network has two directions. East indicates the transmit direction of 0
Ring and the receive direction of 1 Ring, and West indicates the receive direction of 0 Ring and
transmit direction of 1 Ring. Services on 0 Ring are transmitted from West to East, and services
on 1 Ring are transmitted from East to West.
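The direction semantics above fit in a small lookup table; illustrative only:

```python
# (direction, ring) -> role of that side of the node, per the text.
DIRECTION = {
    ("East", "0 Ring"): "transmit",
    ("East", "1 Ring"): "receive",
    ("West", "0 Ring"): "receive",
    ("West", "1 Ring"): "transmit",
}

# Services on 0 Ring travel from West to East; on 1 Ring, East to West.
assert DIRECTION[("East", "0 Ring")] == "transmit"
assert DIRECTION[("West", "1 Ring")] == "transmit"
```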
Values
Value Range: East, West
East: East of the node indicates the transmit direction of 0 Ring and the receive direction of 1 Ring.
West: West of the node indicates the receive direction of 0 Ring and the transmit direction of 1 Ring.
Configuration Guidelines
None.
Values
This parameter is for query only. Each node always has 16 ECHO paths, which are numbered
from 1 to 16.
Configuration Guidelines
None.
The RPR OAM function is used for configuration management, fault management, and performance management on the RPR network. The ECHO function is an RPR OAM function used for connection monitoring and fault localization between two nodes on the ring network.
The ECHO frame is an OAM request or response frame. An ECHO request frame is transmitted from the source address to the destination address. The destination node receives and parses the request frame, and then transmits an ECHO response frame to the requesting node. The requesting node analyzes the link connection status according to the received response frame.
Values
Value Range: Start, Stop, Clear
Default Value: Stop

Start: Indicates that the ECHO function of this ECHO channel is enabled; the source node starts to transmit ECHO request frames to the destination node.
Stop: Indicates that the ECHO function of this ECHO channel is disabled; the source node stops transmitting ECHO request frames to the destination node.
Clear: Indicates that the statistics of this ECHO channel are cleared. The statistics include the number of ECHO messages transmitted, the number processed successfully, the number processed unsuccessfully, and whether the LOC and dLoc alarms are detected.
Configuration Guidelines
This parameter determines whether the ECHO function of the ECHO channel is enabled. To test the link connectivity between two nodes, enable the ECHO function of the ECHO channel after configuring the parameters of the channel. After the test, disable the ECHO function of the ECHO channel to avoid wasting ring bandwidth and CPU resources.
You are advised to disable the ECHO function before clearing the statistics of the ECHO channel.
Values
Value Range: 0 Ring, 1 Ring, Default Directionality
Default Value: Default Directionality

0 Ring: Indicates that the ECHO request frame is transmitted on 0 Ring after the ECHO function of the ECHO channel is enabled.
1 Ring: Indicates that the ECHO request frame is transmitted on 1 Ring after the ECHO function of the ECHO channel is enabled.
Default Directionality: Indicates that the protocol automatically chooses the shortest path as the transmit direction of the ECHO request frame.
Configuration Guidelines
The ECHO frame is used to test the link connection status between two nodes. The user can specify the transmit direction of the ECHO request frame according to the test requirements.
Values
Value Range: 0 Ring, 1 Ring, Default Directionality, Backward
Default Value: Default Directionality

0 Ring: Indicates that the ECHO response frame is transmitted on 0 Ring after the ECHO function of the ECHO channel is enabled.
1 Ring: Indicates that the ECHO response frame is transmitted on 1 Ring after the ECHO function of the ECHO channel is enabled.
Default Directionality: Indicates that the protocol automatically chooses the shortest path as the transmit direction of the ECHO response frame.
Backward: Indicates that the ECHO response frame is transmitted in the direction opposite to the request direction after the ECHO function of the ECHO channel is enabled.
Configuration Guidelines
The ECHO frame is used to test the link connection status between two nodes. The user can specify the transmit direction of the ECHO response frame according to the test requirements.
Values
Value Range: A0, A1, B, C
Default Value: A0

A0: Indicates that the ECHO frame is transmitted with priority A0 on the ring network and occupies bandwidth A (including bandwidths A0 and A1).
A1: Indicates that the ECHO frame is transmitted with priority A1 on the ring network and occupies bandwidth A (including bandwidths A0 and A1).
B: Indicates that the ECHO frame is transmitted with priority B on the ring network and occupies bandwidth B (including bandwidths B_CIR and B_EIR).
C: Indicates that the ECHO frame is transmitted with priority C on the ring network and occupies bandwidth C.
Configuration Guidelines
The ECHO frame is used to test the link connection status between two nodes. The user can specify ECHO Frame Service Type according to the test requirements. For example, to test the connectivity of service A0, set ECHO Frame Service Type to A0.
l If this parameter is set to Yes: when the protection mode is Wrapping or Wrap and Steering, an ECHO request or response frame transmitted in the channel is switched when it passes through a node in the protection switching status; when the protection mode is Steering, the frame is discarded when it passes through a node in the protection switching status.
l If this parameter is set to No, an ECHO request or response frame transmitted in the channel is discarded when it passes through a node in the switching status.
Values
Value Range: Yes, No
Default Value: No
Configuration Guidelines
The user can specify Is Path Protected according to the test requirements.
Values
Default Value: 100
Unit: ms
Configuration Guidelines
The user can modify the value of this parameter according to the ring network situation. The
modification should not go beyond the value range.
Values
Default Value: 100
Unit: ms
Configuration Guidelines
The user can modify the value of this parameter according to the ring network situation. The
modification should not go beyond the value range.
Values
For example, the value 100 indicates that the node receives 100 ECHO messages.
Configuration Guidelines
None.
Values
For example, the parameter value "100" indicates that the node receives 100 ECHO messages
that are processed successfully.
Configuration Guidelines
None.
Values
For example, the parameter value 100 indicates that the node receives 100 ECHO messages that
are processed unsuccessfully.
Configuration Guidelines
None.
Values
Value Range
Default Value
Yes, No
No
Value
Description
Yes
No
Indicates that the node does not detect any dLoc alarm.
Configuration Guidelines
None.
Values
Value Range
Default Value
Yes, No
No
Description
Yes
No
Indicates that the node does not detect any LOC alarm.
Configuration Guidelines
None.
Values
Value Range
Default Value
Idle
Value
Description
Force
Switching
SF
Indicates that the RPR node is in the automatic switching status resulted from
the signal fail (SF) condition.
SD
Indicates that the RPR node is in the automatic switching status resulted from
the signal degrade (SD) condition.
Manual
Switching
Wait-toRestore
Idle
These switching request signals have the following priorities, from high to low: FS, SF,
SD, MS, WTR, IDLE. FS and SF have the highest, and equal, priorities; that is, FS and SF
can preempt each other. Switching signals of higher priorities preempt switching signals
of lower priorities. The switching schemes whose priority is higher than SF can be used
to protect the RPR network at the same time; the switching schemes whose priority is
lower than SF should not be used in the RPR network at the same time. The RPR node can
change to the WTR status only when it recovers from the SD or SF status.
Configuration Guidelines
None.
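The preemption rules described above can be sketched as follows. This is an illustrative model only; the numeric ranks merely order the requests and are not protocol field values.

```python
# Illustrative model of RPR switching-request preemption.
# The ranks only order the requests; they are not protocol encodings.
PRIORITY = {"FS": 5, "SF": 5, "SD": 4, "MS": 3, "WTR": 2, "IDLE": 1}

def preempts(new_request: str, current_request: str) -> bool:
    """Return True if new_request takes effect over current_request.
    FS and SF share the highest priority and can preempt each other;
    any other request preempts only strictly lower priorities."""
    new_p = PRIORITY[new_request]
    cur_p = PRIORITY[current_request]
    if new_p == cur_p == PRIORITY["FS"]:  # FS and SF preempt each other
        return True
    return new_p > cur_p
```

For example, preempts("SF", "FS") returns True, while preempts("MS", "SD") returns False.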
Values
Value Range
Default Value
Not switched
Description
Not switched
Indicates that the node is not in the switching status in this direction.
Switched
Configuration Guidelines
None.
Values
For example, if the displayed value is 10, the accumulated protection count is 10 after the RPR
protocol is enabled on the node.
Configuration Guidelines
None.
Values
For example, if the displayed value is 10, the accumulated protection duration is 10 seconds
after the RPR protocol is enabled on the node.
Configuration Guidelines
None.
Values
Value Range
Default Value
Idle
Description
Force Switching
Indicates that the last switching request issued to the node is a forced
switching.
Manual Switching
Idle
Configuration Guidelines
None.
Values
Value Range
Default Value
Description
Force Switching
When adding a node to the ring network, you need to protect the current
services. Because the link is normal, a switching cannot be performed
automatically but can be performed forcibly only. This protection
request is with the highest priority and cannot be preempted by other
protection requests on the ring network.
Manual Switching
Clear
Configuration Guidelines
Usually, the protection request is required only when the network topology changes. The
user can determine whether to issue the switching request to the node according to the
actual situation. If the user allows the switching request to be preempted by SF/SD,
issue a manual switching request instead. Make sure that all switching requests at this
node are cleared after all the operations are completed.
Values
Value Range
Default Value
Manual, Static
Static
Description
Manual
The LACP protocol is not enabled. The link status, rate, and
duplex mode of a port determine whether the port can carry
services.
Static
Configuration Guidelines
If the user does not require the LACP protocol, set the LAG Type parameter to Manual.
If the user requires the LACP protocol and the two ends use the LACP protocol, set the LAG
Type parameter to Static.
Related Information
None.
Values
Value Range
Default Value
Revertive, Non-Revertive
Non-Revertive
Description
Revertive
Non-Revertive
After the original working port recovers, the services are not
switched.
Configuration Guidelines
To switch the services only when the service-carried port fails, set the Revertive Mode
parameter to Non-Revertive.
To ensure that the port with the highest priority carries services whenever it works
normally, set the Revertive Mode parameter to Revertive.
Related Information
None.
Values
Value Range
Default Value
Sharing, Non-Sharing
Non-Sharing
Description
Non-Sharing
Indicates that only one port carries the services. Only one
slave port can be configured.
Sharing
Configuration Guidelines
When manually configuring the aggregation group, configure the load sharing modes at the
two ends to be the same. Otherwise, certain service packets are lost.
Related Information
None.
Values
Value Range
Default Value
Automatic
Description
Automatic
Source MAC
Destination MAC
Configuration Guidelines
To distribute the traffic on each port as evenly as possible, select a proper Hash
algorithm according to the types of packets on the ports.
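As an illustration of hash-based load sharing, the sketch below maps a MAC address to a member-port index. The XOR-fold hash is an assumption for illustration only, not the board's actual Hash algorithm.

```python
def select_port(mac: str, num_ports: int) -> int:
    """Map a MAC address such as '00-1B-2C-3D-4E-5F' to a LAG member-port
    index by XOR-folding its octets. Illustrative only: the real Hash
    algorithm used by the board is not specified here."""
    octets = [int(part, 16) for part in mac.replace(":", "-").split("-")]
    h = 0
    for octet in octets:
        h ^= octet  # fold all octets into one byte
    return h % num_ports
```

Frames with the same source (or destination) MAC always hash to the same port, which preserves frame order within a flow while spreading different flows across the member ports.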
Values
Value Range
Default Value
32768
Configuration Guidelines
To adopt the result computed by the selection logic of a static LAG, set a higher system
priority (that is, a smaller System Priority value) for this static LAG.
Related Information
None.
Values
Value Range
Default Value
Configuration Guidelines
Every Ethernet port on the NE can be used as the main port.
The main port and the slave port must be of the same type. The rate of the main port must be
the same as the rate of the slave port.
Related Information
None.
Values
Value Range
Default Value
Unknown
Value
Description
In Service
Indicates that the port can be added to an LAG and can carry
the service.
Out of Service
Configuration Guidelines
This parameter is used for query only. No rule is specified for selecting a value.
Related Information
None.
Values
Value Range
Default Value
0-65535
32768
Configuration Guidelines
The port priority increases as the value decreases.
To enable a port to carry services preferentially, set the Port Priority of the port to a
smaller value.
NEA
Port 1
Working
Port 1
NEB
Port 2
Port 2
Protection
LAG a
LAG b
Each port of LAG a and LAG b meets the requirements for carrying services. System Priority
of LAG a is higher than System Priority of LAG b. In LAG a, Port Priority of port 1 is
higher than Port Priority of port 2. In LAG b, Port Priority of port 2 is higher than Port
Priority of port 1.
In this case, in LAG a, port 1 is the working port, port 2 protects port 1, and port 2
does not share the service traffic. The protection relation in LAG b is the same as the
protection relation in LAG a, because System Priority of LAG a is higher than System
Priority of LAG b. That is, in LAG b, port 1 is the working port, port 2 protects port 1,
and port 2 does not share the service traffic, even though Port Priority of port 2 is
higher than Port Priority of port 1 in LAG b.
Related Information
For the setting of this parameter, see System Priority.
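The selection in the example above can be sketched as follows, assuming the convention stated for these parameters that a smaller numeric value means a higher priority; the data structures are illustrative.

```python
def working_port(lag_a_ports, lag_b_ports, sys_prio_a, sys_prio_b):
    """Pick the working port for two LAGs connected to each other.
    The LAG whose System Priority value is smaller (higher priority)
    decides; its member with the smallest Port Priority value becomes
    the working port on both sides. Ports are (name, port_priority)."""
    deciding = lag_a_ports if sys_prio_a < sys_prio_b else lag_b_ports
    return min(deciding, key=lambda port: port[1])[0]

# LAG a: port 1 has the higher port priority (smaller value);
# LAG b: port 2 has the higher port priority. LAG a decides, so
# port 1 becomes the working port in both LAGs.
```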
Values
Value Range
Default Value
1-15
Configuration Guidelines
You need to enter this parameter when creating MCSP channels. This parameter can also be
allocated automatically.
Values
Value Range
Default Value
Unit
1-10
Second
Configuration Guidelines
This parameter must be set according to the network planning and requirements. Generally,
the smaller the value of this parameter, the shorter the interval (in seconds) for
transmitting Hello packets to the opposite end, and the faster a link fault is detected.
When the time during which no Hello message is received exceeds the Timeout Time (s)
threshold, the opposite end is considered faulty and the relevant alarm is reported.
Values
Value Range
Default Value
Unit
30-3600
600
Second
Configuration Guidelines
You need to set this parameter according to the network planning and requirement. This
parameter is related to the Hello Interval parameter set on the opposite end. The smaller the
value of this parameter, the faster the speed of detecting a fault on the link.
Values
Valid Values
Default Value
32768
Configuration Guidelines
If the value of Port Priority is smaller, the priority is higher.
When using a port to carry the services, set Port Priority to a smaller value. Otherwise, set Port
Priority to a greater value.
Values
Valid Values
Default Value
32768
Configuration Guidelines
To take the result selected by the static link aggregation group as the actual value, set System
Priority to a smaller value.
Values
Board Name
Value Range
Default Value
N1EAS2
PORT1-PORT2
VCTRUNK1-VCTRUNK34
N1EGS4, N3EGS4, N4EGS4
PORT1-PORT4
N1EMS4
PORT1-PORT20
Description
Configuration Guidelines
Add the relevant slave port to the link aggregation group as required. The maximum number of
slave ports differs with the board type.
Board
Non-Sharing
N1EAS2
23
N1EGS4, N3EGS4, N4EGS4
15
N1EMS4
Sharing
Non-Sharing
15
Values
Valid Values
Default Value
Unknown
Value
Description
Unknown
In Service
Out of Service
Configuration Guidelines
This parameter is used for query only. No rules are provided for selecting a value.
For static link aggregation, the LACP protocol is used. The member port state in the link
aggregation group is decided by parameters such as the port working mode, port working
rate, port priority, and link aggregation group priority.
For manual link aggregation, the LACP protocol is not used. The member port state in the
link aggregation group is not related to parameters such as the port working mode and
port working rate.
Values
Board Name
Valid Values
Default Value
N2EGR2, N2EGS2,
N3EGS2
PORT1-PORT2
N2EFS4, N3EFS4
PORT1-PORT4
PORT1-PORT8
N2EMR0
PORT1-PORT13
N1EFS0A
PORT1-PORT16
N1EMS2
PORT1-PORT18
Value
Description
Configuration Guidelines
Add a relevant branch port to a link aggregation group as required.
Values
Value Range
Default Value
Sharing, Non-Sharing
Sharing
Description
Sharing
Non-Sharing
Configuration Guidelines
If the bandwidth needs to be increased and several ports need to be enabled to share the service,
select the load sharing mode. If only one port needs to carry the service and protection is required
for this port, select the load non-sharing mode.
If the service needs to be switched back to the main board, set Revertive Mode to
Revertive.
If a service does not need to be switched back to the main board and is still transmitted
on the slave board, set Revertive Mode to Non-Revertive.
Values
Valid Values
Default Value
Revertive, Non-Revertive
Revertive
Configuration Guidelines
- To avoid frequent switching and to switch the service only when the working board port
fails, select Non-Revertive.
- If the user requires that the main port carry the service whenever it works normally,
select Revertive.
Values
Valid Values
Default Value
0-65535
32768
Configuration Guidelines
If the value of Port Priority is smaller, the priority is higher.
To enable a port to be preferred to carry the services, set Port Priority to a smaller value.
Otherwise, set Port Priority to a greater value.
For a DLAG, to enable the main port to be preferred to carry the services, set the priority of the
main port in the DLAG higher than that of the slave port.
Values
Valid Values
Default Value
0-65535
32768
Configuration Guidelines
If the value of Port Priority is smaller, the priority is higher.
To enable a port to be preferred to carry the services, set Port Priority to a smaller value.
Otherwise, set Port Priority to a greater value.
For a DLAG, to enable the slave port to be preferred to carry the services, set the priority of the
slave port in the DLAG higher than that of the main port.
Values
Value Range
Default Value
Enabled, Disabled
Disabled
Configuration Guidelines
The user can set this parameter according to the actual service requirement.
Related Information
The rapid spanning tree protocol (RSTP) can realize all the functions of the spanning tree. Similar
to the STP, the RSTP avoids temporary loops. Different from the STP, the RSTP shortens the
time delay at the ports from blocking to forwarding, restores the network connectivity more
rapidly, and provides better services.
The STP is a Layer 2 management protocol that avoids Layer 2 loops by selectively
blocking redundant network links and supports the link backup.
The RSTP develops from the STP and shortens the convergence time.
Values
Value Range
Default Value
STP, RSTP
RSTP
Configuration Guidelines
The RSTP and STP can be configured at the same time. The RSTP is compatible with the STP.
It is recommended that you use the default value RSTP.
Values
Value Range
Default Value
32768
Configuration Guidelines
Set this parameter according to what role the user expects the bridge to play in the spanning tree
topology.
Values
Value Range
Default Value
Unit
6-40
20
Configuration Guidelines
When you set the value of this parameter, ensure that the following requirement is met:
2 x (Hello Time + 1) <= Max Age <= 2 x (Forward Delay - 1)
Values
Value Range
Default Value
Unit
1-10
Configuration Guidelines
When you set the value of this parameter, ensure that the following requirement is met:
2 x (Hello Time + 1) <= Max Age <= 2 x (Forward Delay - 1)
Values
Value Range
Default Value
Unit
4-30
15
Configuration Guidelines
When you set the value of this parameter, ensure that the following requirement is met:
2 x (Hello Time + 1) <= Max Age <= 2 x (Forward Delay - 1).
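With the default values (Hello Time 2 s, Max Age 20 s, Forward Delay 15 s), the constraint holds: 2 x (2 + 1) = 6 <= 20 <= 2 x (15 - 1) = 28. A minimal check of the relation:

```python
def timers_valid(hello_time: int, max_age: int, forward_delay: int) -> bool:
    """Check the bridge-timer relation required above:
    2 x (Hello Time + 1) <= Max Age <= 2 x (Forward Delay - 1)."""
    return 2 * (hello_time + 1) <= max_age <= 2 * (forward_delay - 1)
```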
Values
Value Range
Default Value
1-10 times/s
Configuration Guidelines
Set this parameter according to the actual requirement of the user. It is recommended that you
use the default value.
Among the bridges that are connected to a network segment through their bridge ports, the
bridge whose root path cost is the smallest is selected as the designated bridge of that
segment. If the root path cost values of two or more bridges are the same and the
smallest, the bridge with the higher priority is selected.
In the case of non-root bridges, the root path cost of a bridge is equal to the sum of
the path cost values of the ports that a frame traverses along the minimum-cost path from
the root bridge to that bridge.
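As a worked example of this definition, the root path cost is the sum of the port path costs traversed along the minimum-cost path; the cost values below are illustrative.

```python
def root_path_cost(port_costs):
    """Sum the path costs of the ports traversed on the minimum-cost
    path from the root bridge to this bridge (illustrative values)."""
    return sum(port_costs)

# Two hops over 100 Mb/s links, each with the recommended cost 200000:
# root path cost = 200000 + 200000 = 400000
```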
Values
Based on the protocol, the value of this parameter is calculated according to the network
topology. This parameter is used for querying.
Configuration Guidelines
There are no principles for setting the value of this parameter because this parameter is used for
querying.
Values
This parameter is used for querying.
Configuration Guidelines
There are no principles for setting the value of this parameter because this parameter is used for
querying.
A.20.10 Port ID
Description
The Port ID parameter contains 16 bits, which carry the port priority and the port number
that is unique within the bridge. The first eight bits indicate the port priority, and
the last eight bits indicate the port number. The port ID represents the priority of the
port in the spanning tree: the smaller the value of the port ID, the higher the priority
of the port within the bridge. To keep the RSTP compatible with the STP, the port
priority is represented by eight bits, of which the last four bits are 0 for easy
management.
Values
The parameter value is in decimal notation. For example, Port ID = 32769.
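The example value can be decomposed as described above, with the first eight bits carrying the priority and the last eight bits the port number:

```python
def split_port_id(port_id: int):
    """Split a 16-bit port ID into (priority, port number)."""
    return (port_id >> 8) & 0xFF, port_id & 0xFF

# Port ID = 32769 (0x8001): priority 128, port number 1
```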
Parameter: Path Cost

Link Speed    Recommended Value   Recommended Range    Range
<=100 Kb/s    200000000           20000000-200000000   1-200000000
1 Mb/s        20000000            2000000-200000000    1-200000000
10 Mb/s       2000000             200000-20000000      1-200000000
100 Mb/s      200000              20000-2000000        1-200000000
1 Gb/s        20000               2000-200000          1-200000000
10 Gb/s       2000                200-20000            1-200000000
100 Gb/s      200                 20-2000              1-200000000
1 Tb/s        20                  2-200                1-200000000
10 Tb/s       2                   1-20                 1-200000000
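The recommended values in the table follow a simple inverse rule, consistent with IEEE 802.1t: the recommended path cost is 20,000,000,000,000 divided by the link speed in bit/s, clamped to the 1-200000000 range. A sketch:

```python
def recommended_path_cost(link_speed_bps: int) -> int:
    """Recommended path cost as an inverse of link speed, matching
    the table above (cf. IEEE 802.1t), clamped to 1-200000000."""
    cost = 20_000_000_000_000 // link_speed_bps
    return max(1, min(cost, 200_000_000))
```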
Values
Board Name
Value Range
Default Value
N4EFS0, N2EFS4,
N2EGS2, N2EMR0,
N2EGR2, N1EMS4,
N1EGS4, N3EGS4,
N1EAS2
1-200000000
- 19 (FE port)
- 4 (GE port)
- 2 (VCTRUNK, 10 GE, and RPR ports)
Configuration Guidelines
The greater the port rate, the smaller the port path cost. Generally, use the default
value.
Parameter: Path Cost

Link Speed    Recommended Value   Recommended Range    Range
<=100 Kb/s    200000000           20000000-200000000   1-200000000
1 Mb/s        20000000            2000000-200000000    1-200000000
10 Mb/s       2000000             200000-20000000      1-200000000
100 Mb/s      200000              20000-2000000        1-200000000
1 Gb/s        20000               2000-200000          1-200000000
10 Gb/s       2000                200-20000            1-200000000
100 Gb/s      200                 20-2000              1-200000000
1 Tb/s        20                  2-200                1-200000000
10 Tb/s       2                   1-20                 1-200000000
Values
Value Range
Default Value
1-65535
Description
1-65535
Configuration Guidelines
The greater the port rate, the smaller the designated path cost. It is recommended that
you use the default value.
ID consists of the priority and MAC address of the bridge. The bridge whose ID is the smallest
is selected as the root bridge in this network.
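The election rule above can be sketched as follows: the bridge ID orders first by priority and then by MAC address, and the numerically smallest ID wins. The bridge data below is illustrative.

```python
def elect_root(bridges):
    """Select the root bridge from (priority, mac) pairs: the
    smallest bridge ID (priority first, then MAC address) wins."""
    return min(bridges, key=lambda bridge: (bridge[0], bridge[1]))
```

For example, a bridge with priority 4096 beats one with the default 32768 regardless of MAC address; with equal priorities, the smaller MAC address wins.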
Values
Value Range
Default Value
0-65535
32768
Configuration Guidelines
There are no specific principles for setting the value of this parameter because this parameter is
used for querying.
Values
Value Range
Default Value
0-61440
32768
Configuration Guidelines
If the value of the parameter is smaller, the priority of the bridge is higher. Set this parameter
according to the actual network condition.
To ensure that the bridge protocols operate normally, the following requirements should
be met:
- The multicast MAC address must be unique and be identified by all the bridges in the
LAN. The multicast MAC address identifies the protocol entities of a bridge that is
connected to different and individual physical network segments.
- Ports on a bridge have port IDs that are unique within that bridge. Port IDs are
assigned independently on each bridge, so the same values can also be used by other
bridges.
- Each bridge must provide the values of the parameters that are described previously or
provide the mechanism for assigning values to these parameters.
Values
Value Range
Default Value
00-00-00-00-00-00
Configuration Guidelines
It is recommended that you use the default value.
Values
Value Range
Default Value
Enabled, Disabled
Disabled
Description
Enabled
Indicates that the STP is enabled on the local port and that this
port is involved in the calculation.
Disabled
Indicates that the STP is disabled on the local port and this
port is not involved in the calculation.
Configuration Guidelines
Set this parameter according to the networking condition. If the topology calculation requires a
VB, enable the STP for this VB. Otherwise, the STP is disabled for all VBs.
The STP of a port can be enabled or disabled only after the STP is enabled for a VB.
Values
Value Range
Default Value
Enabled, Disabled
Disabled
Configuration Guidelines
You can set this parameter based on the network topology.
Values
Value Range
Default Value
Enabled, Disabled
Disabled
Description
Enabled
Disabled
Configuration Guidelines
The user can set this parameter according to the network topology to reduce the delay in the case
of a state migration of the network edge port.
other ports. In the same bridge, if the root path cost is the same for multiple ports,
the designated bridge IDs are compared first, and then the designated port IDs.
Values
Value Range
Default Value
128
Configuration Guidelines
The value of VB Port Priority must be an integer multiple of 16. The smaller the value,
the higher the priority. You can set the value based on the requirement of the user.
Values
Valid Values
Default Value
Discarding, Forwarding,
Learning
Forwarding
Value
Description
Discarding
Forwarding
Value
Description
Learning
Values
Valid Values
Default Value
Adaptive connection
Value
Description
Adaptive connection
Shared media
Link connection
If the port has the point-to-point attribute, the port state can
be transited rapidly.
Configuration Guidelines
If the port connection mode is known, select Shared media or Link connection. Otherwise,
select Adaptive connection.
Values
Value Range
Default Value
Enabled, Disabled
Disabled
Description
Disabled
Enabled
Configuration Guidelines
You can set Enabling LCAS as required.
Values
Value Range
Default Value
Huawei Mode
Description
Huawei Mode
Standard Mode
Configuration Guidelines
To set the LCAS mode, follow these principles:
- If the interconnected equipment at the two ends is Huawei equipment, select Huawei
Mode.
Values
Value Range
Default Value
Unit
0, 2000-10000
2000
ms
Configuration Guidelines
The user can set this parameter according to the expected hold-off time of LCAS switching.
Values
Value Range
Default Value
Unit
0-720
300
Second
Configuration Guidelines
The user can set this parameter according to the expected WTR duration of LCAS recovery.
Values
Value Range
Default Value
Enabled, Disabled
Disabled
Configuration Guidelines
You can set whether to enable the TSD as required.
Values
Value Range
Default Value
2-256
256
Configuration Guidelines
Set this parameter according to the actual requirement of the user.
Values
Value Range
Default Value
Unbound, Bound
Unbound
Description
Bound
Indicates that the LPT function is already bound with the service.
Unbound
Indicates that the LPT function is not bound with the service.
Configuration Guidelines
You need to configure a service before setting the binding status between the LPT
function and the service. The default value of this parameter is Unbound.
The primary function point is the master node that runs the LPT protocol and determines
the LPT protocol status based on the operating information. The secondary function point
senses or transmits status change information, such as the status change of a port or a
remote node. The point-to-multipoint LPT includes one primary function point and multiple
secondary function points, which respectively correspond to the root and the leaves in
the topology.
Values
Value Range
Default Value
Slot ID-Board-Port ID
Description
Slot ID-BoardPort ID
Indicates the port ID, board and slot ID of the primary function
point.
Configuration Guidelines
None.
Values
Value Range
Default Value
PW, UNI
Description
PW
UNI
Configuration Guidelines
In packet mode, LPTs are classified as point-to-point LPT and point-to-multipoint LPT. NEs
are classified as Primary Function Point and Secondary Function Point. The Secondary
Function Point Type can be set only when you configure the point-to-point LPT.
Values
Value Range
Default Value
Slot ID-Board-Port
Value
Description
Slot ID-BoardPort
Configuration Guidelines
None.
Values
Value Range
Default Value
LPT OAM
Description
LPT OAM
PW OAM
Indicates that the LPT protocol suite determines the LPT link status according to the
protocol packets and the PW OAM status reported by a certain board. The LPT protocol
suite considers the LPT link to be in good status only when both the negotiation status
of the protocol packets and the PW OAM status are normal.
Configuration Guidelines
If the LPT detection modes differ at both ends, the LPT fails to work normally.
Values
Value Range
Default Value
CLOSE, OPEN
OPEN
Description
OPEN
CLOSE
Configuration Guidelines
When the LPT protocol works normally, OPEN is displayed. When a fault occurs on the network
side, the OptiX OSN equipment shuts down the laser on the local end and reports CLOSE.
A.23.1 LPT
Description
The LPT parameter specifies whether the link state pass through (LPT) function is enabled. The
LPT is a technology developed by Huawei to increase the speed of the link state response.
Through the LPT protocol, the faults on the service access point and in the intermediate network
can be detected and reported.
Values
Value Range
Default Value
Yes, No
No
Description
Yes
No
Configuration Guidelines
Set this parameter according to the actual requirement of the user. Set this parameter to Yes if
the LPT function is required.
Values
Value Range
Default Value
GFP(HUAWEI)
Value
Description
GFP(HUAWEI)
Ethernet
GFP(CSF)
Configuration Guidelines
Set this parameter according to the actual requirement of the user. Ensure that the configurations
of the two interconnected ports are consistent.
Values
Value Range
Default Value
Unit
0-100000
ms
Configuration Guidelines
Set this parameter according to the actual requirement of the user.
Values
Value Range
Default Value
Unit
0-100000
ms
Configuration Guidelines
Set this parameter according to the actual requirement of the user.
- If the IGMP Snooping protocol is enabled, the Ethernet physical port captures and
analyzes the received IGMP packets. Then the Ethernet physical port registers the
multicast information to generate the router port and the multicast group. Finally, the
Ethernet physical port transparently transmits the packets.
- If the IGMP Snooping protocol is disabled, the Ethernet physical port does not analyze
the received IGMP packets. Instead, the Ethernet physical port broadcasts the IGMP
packets as ordinary multicast packets.
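The two behaviors can be modeled as follows; this is a simplified sketch in which packet parsing and table maintenance are reduced to placeholders.

```python
def handle_packet(snooping_enabled: bool, packet: dict) -> str:
    """Model of the port behavior described above: with snooping on,
    IGMP packets are analyzed and registered before being forwarded;
    with snooping off, they are broadcast as ordinary multicast."""
    if snooping_enabled and packet.get("protocol") == "IGMP":
        # register the router port / multicast group, then forward
        return "register-and-forward"
    return "broadcast"
```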
Values
Valid Values
Default Value
Enabled, Disabled
Disabled
Value
Description
Enabled
Disabled
Configuration Guidelines
- If the aging time is long and the multicast MAC address table in the board fails to be
updated in time, the board forwards the service packets incorrectly. Consequently, the
forwarding efficiency is decreased.
- If the aging time is short, the multicast MAC address table may be updated rapidly.
Moreover, a great number of received multicast service packets fail to be found in the
MAC address table. Consequently, the board broadcasts these data packets to all the
ports. As a result, the forwarding efficiency is also decreased.
Values
Valid Values
Default Value
Unit
Min
Configuration Guidelines
Generally, use the default value unless otherwise required. If another value is needed,
set it according to the requirements, without going beyond the range allowed by the board.
Values
Value Range
Default Value
Unit
0-255
Unit
Configuration Guidelines
The user can set the number of test packets to be transmitted as required.
A.25.2 Status
Description
Status indicates the transmit status of the current test frames at the port. This parameter is
displayed as the current test status after you configure Send Mode and Frames to Send and
then click Apply.
Values
Value Range
Default Value
Description
Sending
Finished Sending
Configuration Guidelines
None.
Values
For example, the parameter value 5 indicates that the port transmits five test frames.
Configuration Guidelines
None.
Values
For example, the parameter value 5 indicates that this port receives five response test frames.
Configuration Guidelines
None.
Values
For example, the parameter value 5 indicates that the port receives five test frames.
Configuration Guidelines
None.
Values
Board Name
Value Range
Default Value
GFP, Ethernet
GFP
Description
GFP
Ethernet
Configuration Guidelines
The Bearer Mode values of the test frame at the two ends of the SDH link must be consistent
so that the test frame can function properly.
Values
Valid Values
Default Value
Disabled
Description
Disabled
Burst Mode
Continue Mode
Configuration Guidelines
Set this parameter according to the requirements of the test.
Values
Valid Values
Default Value
Unit
1-9
Configuration Guidelines
For all the network elements (NEs) that communicate with each other over the orderwire phone,
this parameter must be set to the same value.
- If the number of NEs is less than 30, usually set the value to 5 seconds.
- If the number of NEs is 30 or more, usually set the value to 9 seconds.
Values
Valid Values
Default Value
100-99999999
999
Configuration Guidelines
- For orderwire conference calls on each node of the same subnet, the phone number must
be the same.
- The length of an orderwire conference call number can be set as required. The value
range is 3-8.
- The length of an orderwire conference call number must be consistent with that of the
addressing call number.
A.26.3 Phone
Description
The Phone parameter specifies the phone numbers of orderwire addressing calls. An addressing
call refers to a point-to-point call.
Values
Valid Values
Default Value
100-99999999
101
Configuration Guidelines
- The phone numbers of orderwire addressing calls cannot be duplicated within the same
subnet.
- The length of an orderwire call number can be set as required. The value range is 3-8.
- Within the same orderwire network, the length of orderwire call numbers must be
consistent for each node.
- The length of phone numbers used to make orderwire addressing calls must be consistent
with that of conference call numbers.
Values
Valid Values
Default Value
Bid-BidType-PortID
Description
Bid-BidType-PortID
Configuration Guidelines
None.
Values
Valid Values
Default Value
Bid-BidType-PortID
Description
Bid-BidType-PortID
Configuration Guidelines
The NG-SDH equipment supports the function of automatically releasing the ring of an
orderwire conference call. For this reason, when NG-SDH equipment is interconnected with
each other, do not set this parameter when you make an orderwire conference call. When
the NG-SDH equipment is interconnected with other equipment (for example, the OptiX OSN
9500), you can make orderwire conference calls only after Selected Conference Call Port
is selected for the interconnected optical ports.
Values
Valid Values
Default Value
1, 2
Value
Description
Configuration Guidelines
Select the value according to the number of orderwire subnets.
- If the number of orderwire subnets is less than 10, set Subnet No. Length to 1.
- If the number of orderwire subnets is 10 or greater, set Subnet No. Length to 2.
Values
Valid Values
Default Value
0-99
Configuration Guidelines
If the subnet number length is set to 1, the value range is 0-9.
If the subnet number length is set to 2, the value range is 10-99.
For the optical ports in the same orderwire subnet, the subnet number must be the same.
Values
Valid Values
Default Value
1-88
Description
1-60
1-88
NOTE
The OptiX OSN 1500A and the OptiX OSN 1500B do not support the F1 data ports that have the same
direction.
Configuration Guidelines
None.
Values
Valid Values: F1, Bid-BidType-PortID
Configuration Guidelines
When using the F1 data port, you need to configure its route. That is, specify whether the 64 kbit/s data is added to the NE, dropped from the NE, or passed through the NE.
Values
Valid Values: SERIAL1, SERIAL2, SERIAL3, SERIAL4
Default Value: SERIAL1
Configuration Guidelines
Select the value according to the configuration.
Values
Valid Values: RS232, RS422
Configuration Guidelines
Select the value according to the interface.
Values
Valid Values: No Data, SERIALx, Bid-BidType-PortID
Configuration Guidelines
Select the value according to the configuration.
Values
Valid Values: SERIALx, Bid-BidType-PortID
Configuration Guidelines
Select the value according to the configuration.
Values
Valid Values: 2 MHz, 2 Mbit/s
Default Value: 2 Mbit/s
Configuration Guidelines
Input modes of the two channels of external clock signals can be set to 2 MHz or 2 Mbit/s. The
default input mode is 2 Mbit/s. In practical application, make sure that the output mode matches
the input mode on the receive end.
Related Information
None.
timeslot. After starting the SSM protocol, make sure that the timeslot for receiving the S1 byte
is consistent with the timeslot for transmitting the S1 byte so that the S1 byte can be received
correctly.
Values
Valid Values: SA4, SA5, SA6, SA7, SA8, All versions
Default Value: All versions
Configuration Guidelines
None.
Related Information
None.
Values
Valid Value
Default Value
Threshold Disabled
Value
Description
Threshold Disabled
Configuration Guidelines
The output quality of the external clock source should not be inferior to the specified quality
threshold. Therefore, the quality threshold should be set to a value that is inferior to or equal to
the output clock quality of NEs.
Related Information
None.
Values
Valid Values: No Failure Condition, AIS, LOF, AIS OR LOF
Default Value: No Failure Condition
Configuration Guidelines
A failure condition can be set as required.
Related Information
None.
Values
Valid Values: Send AIS, 2M Output S1 Byte Unavailable
Configuration Guidelines
When the 2M phase-locked source is invalid, the output action can be set as required. When the External Clock Output Mode parameter is set to 2 MHz, the external clock signal output is shut down regardless of the action that is set.
Related Information
None.
Values
Value Range: 1, 2, 3
Configuration Guidelines
This parameter specifies the priority level of a certain clock source in the system clock priority
table. The value ranges from 1 to N. N represents the total number of clock sources in the system
clock priority table. Generally, the internal clock source is at the lowest priority level.
For example, there are four clock sources in the system clock priority table: one external clock
source, one line clock source, one tributary clock source, and one internal clock source. The
priority level of the external clock source is set to 1, the priority level of the line clock source is
set to 2, the priority level of the tributary clock source is set to 3 (the priority levels of the external
clock source, line clock source, and tributary clock source cannot be the same), and the priority
level of the internal clock source is always set to 4.
In the actual application, this parameter is set according to the specific networking situation.
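The priority-based selection described above can be sketched as follows. This is an illustrative sketch only, not U2000 or NE software; the source names and priority values are hypothetical, matching the four-source example in the text.

```python
# Illustrative sketch of priority-based clock source selection (not product
# code): the NE traces the available clock source with the smallest priority
# value, where 1 is the highest priority.
def select_clock_source(priority_table):
    """Return the name of the available clock source with the highest priority."""
    available = [s for s in priority_table if s["available"]]
    return min(available, key=lambda s: s["priority"])["name"]

# Hypothetical priority table mirroring the four-source example above.
priority_table = [
    {"name": "external",  "priority": 1, "available": False},  # failed
    {"name": "line",      "priority": 2, "available": True},
    {"name": "tributary", "priority": 3, "available": True},
    {"name": "internal",  "priority": 4, "available": True},   # lowest priority
]
```

With the external clock source failed, the sketch traces the line clock source next, which mirrors the priority order in the example above.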
Values
Value Range: No Threshold Value
Default Value: No Threshold Value
No Threshold Value: Threshold disabled.
Configuration Guidelines
In actual application, the output quality threshold of external clock source should be determined
according to the quality information about the NE clock and the opposite NE.
Related Information
None.
Values
Value Range
Default Value
Configuration Guidelines
The protection status of the entire clock subnet should be consistent. To avoid a clock tracing
loop on a ring or mesh network, it is recommended that you set this parameter to Start Extended
SSM Protocol.
Related Information
As a mechanism adopted by the synchronous network for synchronization management, the
standard SSM protocol is loaded in the lower four bits of the S1 byte. The standard SSM protocol
allows nodes to exchange the quality information of clock sources. Hence, the SSM protocol
enables the system to automatically select the clock source with the highest priority and also
avoids a timing loop. The standard SSM protocol is applicable to interconnection with the
equipment of other suppliers.
Based on the standard SSM protocol, the extended SSM protocol introduces the concept of a clock source ID. The higher four bits of the S1 byte carry a unique clock source ID, which is transmitted with the SSM. When receiving the S1 byte, a node checks the clock source ID to see whether the clock source derives from itself. If it does, the node considers the clock source unavailable. In this manner, a clock interlock loop is avoided when the clock tracing path is configured as a ring. The extended SSM protocol is applicable to the interconnection of Huawei transmission equipment.
When the SSM protocol is disabled, the clock source is selected according to the priority table.
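The S1 byte layout described above (lower four bits carry the SSM quality level, higher four bits carry the extended-SSM clock source ID) can be sketched as follows. This is an illustrative sketch, not NE software; the function names are hypothetical.

```python
# Illustrative sketch of the S1 byte layout described above (not product code).
def parse_s1(s1_byte):
    """Split an S1 byte into (clock_source_id, ssm_quality): the higher four
    bits carry the extended-SSM clock source ID, the lower four bits carry
    the SSM quality level."""
    return (s1_byte >> 4) & 0x0F, s1_byte & 0x0F

def accept_clock_source(s1_byte, own_clock_id):
    """Extended-SSM rule sketched above: a node treats a clock source as
    unavailable when the received clock source ID is its own, so that a
    timing loop is avoided on a ring."""
    clock_source_id, _ = parse_s1(s1_byte)
    return clock_source_id != own_clock_id
```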
Values
Value Range: Yes, No
Default Value: No
Configuration Guidelines
In actual application, it is recommended that you set the AIS alarm as a condition for triggering clock source switching, to ensure system performance.
Related Information
None.
Values
Value Range: Yes, No
Default Value: No
Configuration Guidelines
A B1 BER threshold-crossing alarm indicates that the transmitted signal and the clock in the signal are subject to interference. Therefore, this parameter can be set as a condition for triggering clock source switching in actual application to ensure system performance.
Related Information
None.
Values
Value Range: Yes, No
Default Value: No
Configuration Guidelines
A B2-EXC alarm indicates that the transmitted signal and the clock in the signal are subject to interference. Therefore, this parameter can be set as a condition for triggering clock source switching in actual application to ensure system performance.
Related Information
None.
Values
Value Range: Non-Revertive, Auto-Revertive
Default Value: Auto-Revertive
Configuration Guidelines
If the conditions for clock source switching are properly set and the switching of clock sources
can be guaranteed, the Auto-Revertive mode can be selected to improve clock quality.
Otherwise, the Non-Revertive mode is recommended to avoid clock jitters.
Related Information
None.
Values
Value Range: 0-12
Configuration Guidelines
The WTR time is counted in minutes. The shorter the WTR time is, the faster the clock is restored, and the higher the average clock quality is. On the other hand, the shorter the WTR time is, the more likely clock jitter is to occur due to unstable clock signals. Therefore, do not set the WTR time to 0 in actual application.
Related Information
None.
Values
Value Range: Normal, Forced Switching, Manual Switching
Default Value: Normal
Configuration Guidelines
None.
Related Information
None.
Values
Value Range: Lock, Unlock
Default Value: Unlock
Configuration Guidelines
None.
Related Information
None.
Values
Value Range: 1-15, None
Default Value: None
Configuration Guidelines
In actual application where the extended SSM protocol is enabled, the clock source IDs should
be set as required by network planning to ensure that the ID is network-wide unique.
Related Information
None.
Values
Default Value: None
Configuration Guidelines
None.
Related Information
None.
Values
Value Range: SA4, SA5, SA6, SA7, SA8
Default Value: SA4
Configuration Guidelines
The timeslot for receiving the S1 byte is set for external clock source only. The timeslot can be
set to SA4, SA5, SA6, SA7 or SA8. The default timeslot is SA6. In actual application, make
sure that the specified timeslot for receiving the S1 byte is the timeslot for carrying the SSM
information in the external clock source. That is, the specified timeslot for receiving the S1 byte
is the transmit timeslot of the opposite NE.
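As background, in a 2 Mbit/s signal the Sa4-Sa8 bits occupy bit positions 4-8 (most significant bit first) of timeslot 0 in frames that do not contain the frame alignment signal, per ITU-T G.704. Reading one Sa bit can be sketched as follows; this is an illustrative sketch only, not product code, and the function name is hypothetical.

```python
# Illustrative sketch (not product code): extract one Sa bit from the TS0 byte
# of an E1 frame that does not contain the frame alignment signal (NFAS frame).
# Per G.704, bit positions 4-8 (MSB first, 1-indexed) of that byte carry
# Sa4-Sa8, one of which may carry the SSM information.
def extract_sa_bit(nfas_ts0_byte, sa_number):
    if not 4 <= sa_number <= 8:
        raise ValueError("Sa bits are numbered 4 to 8")
    shift = 8 - sa_number  # bit position 1 is the most significant bit
    return (nfas_ts0_byte >> shift) & 1
```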
Related Information
None.
Values
Value Range: Synchronous Source Unavailable, Quality Unknown, G.811 Reference Clock, G.812 Transit Clock, G.812 Local Clock, SDH equipment timing source (SETS) signal
Default Value: None
Configuration Guidelines
None.
Related Information
None.
Values
Value Range: Normal Mode, Holdover Mode, Free-Run Mode
l Normal Mode: Indicates that the NE clock works in the tracing mode. That is, the NE clock traces and locks to its upper-level clock.
l Holdover Mode: Indicates that the NE clock works in the holdover mode. In this mode, the NE clock uses the frequency information that was stored before all timing reference signals were lost as its timing reference.
l Free-Run Mode: Indicates that the NE clock works in the free-run mode. The internal oscillator works in this mode when all external timing reference signals are lost.
Configuration Guidelines
None.
Related Information
None.
Values
Value Range: Automatic Extraction, G.813 SDH Equipment Timing Source (SETS) Signal, Unknown Synchronization Quality
Default Value: Automatic Extraction
l G.813 SDH Equipment Timing Source (SETS) Signal: Indicates that the clock source quality is manually set to the SETS clock signal.
Configuration Guidelines
This parameter is usually set to Automatic Extraction. When the equipment is interconnected
to an NE from another manufacturer that complies with a different protocol, the clock source
quality can be specified manually.
Related Information
None.
Values
Default Value: Synchronous Source Unavailable
Configuration Guidelines
None.
Related Information
None.
Values
l Indicates that only the port on the line board where both the physical board and the logical board have been configured properly can be used as the output port of SSM quality information.
l Indicates that only the port on the board with external clock interfaces where both the physical board and the logical board have been configured properly can be used as the output port of SSM quality information.
Configuration Guidelines
None.
Related Information
The standard SSM protocol mode, also called the QL_ENABLE mode in Recommendation G.781, is a universal clock source switching mode. In this mode, when a clock source becomes invalid, the system automatically traces the clock source of the highest quality in the clock priority table according to the quality information contained in the SSM protocol. If two clock sources are both of the highest quality, the clock source of the higher priority is selected.
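The QL_ENABLE selection rule above (best quality first, priority as the tie-break) can be sketched as follows. This is an illustrative sketch only, not product code; the field names are hypothetical, and it assumes the usual SSM convention that a numerically smaller quality code indicates a better quality level.

```python
# Illustrative sketch of QL_ENABLE-style selection (not product code): among
# valid clock sources, pick the best quality level; break quality ties by
# priority, where 1 is the highest priority. Assumes a smaller quality code
# means a better quality level.
def select_ql_enable(sources):
    valid = [s for s in sources if s["valid"]]
    return min(valid, key=lambda s: (s["quality"], s["priority"]))["name"]
```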
Values
Value Range: Enable, Disabled
Default Value: Enable
Configuration Guidelines
The Control Status (Clock) parameter can be set to an enabled or disabled status as required.
The status is enabled by default. In actual application, if the output clock source on the line board
is valid, the transmission of the S1 byte through line port is allowed so that the clocks of the NEs
in the entire synchronous network are synchronous.
Related Information
The standard SSM protocol mode, also called the QL_ENABLE mode in Recommendation G.781, is a universal clock source switching mode. In this standard SSM protocol mode, when a clock source becomes invalid, the system automatically traces the clock source of the highest quality in the clock priority table according to the quality information contained in the SSM protocol. If two clock sources are both of the highest quality, the clock source of the higher priority is selected.
Values
Indicates that only the port on the line board where both the physical board and the logical board have been configured properly can be used as the output port of the clock source ID.
Configuration Guidelines
None.
Related Information
None.
Values
Value Range: Enabled, Disabled
Default Value: Enabled
Configuration Guidelines
The Enabled Status (Clock ID) parameter can be set to an enabled or disabled status as required.
The status is enabled by default. In actual application, as long as the extended SSM protocol is
started and the output clock source on the line board is valid, it is allowed to output clock source
ID in the four most-significant bits of the S1 byte through line ports. This can ensure clock
synchronization among the NEs in the entire synchronous network and prevent the occurrence
of timing loops.
Related Information
None.
Values
Value Range
Default Value
Value
Description
Configuration Guidelines
The Keep the Latest Data mode is a forced holdover mode. Therefore, the clock accuracy is not
high. In actual application, the Normal Data Output mode is recommended.
Related Information
None.
Values
Value Range
Default Value
Value
Description
SETS Clock
Value
Description
Configuration Guidelines
In actual application, this parameter can be set according to the quality information about specific
NE clocks.
Related Information
None.
Values
Default Value: Normal
Configuration Guidelines
Select the proper clock according to the actual networking planning of the user.
This parameter is applicable to the PQ1 and PQM boards. When the PQM board functions as
the protection board, the function of setting the retiming mode is unavailable.
Values
Value Range: Single-Ended Switching, Dual-Ended Switching
Default Value: Single-Ended Switching
Configuration Guidelines
In the case of the 1+1 MSP, you can set this parameter to Single-Ended Switching or Dual-Ended Switching. In the case of the 1:N MSP, you can set this parameter to Dual-Ended Switching only.
Values
Value Range: TIM, EXC, SD, UNEQ
Default Value: EXC, SD
Configuration Guidelines
Set this parameter according to the actual requirement of the user.
Values
Value Range: Null, Virtual Concatenation Grouping
Default Value: Null
Configuration Guidelines
To bind SNCP pairs, the user needs to set this parameter to Virtual Concatenation
Grouping for these SNCP pairs.
Values
Value Range: Selected, Deselected
Default Value: Deselected
Selected: Specifies an SNCP group that has the same source and the same sink.
Configuration Guidelines
When configuring SNCP for a tangent node on the SNCP ring, you can select the Configure
SNCP Tangent Ring check box.
Values
The value range varies according to the specified service type.
Service Type / Value Range:
l VC-4
l VC-3: 1 to 3
l VC-12: 1 to 63
Configuration Guidelines
Enter multiple consecutive timeslots in the format of Start timeslot number-End timeslot
number. Enter multiple inconsecutive timeslots in the format of ts1, ts2, .... You can enter the
timeslots in a combination of the two formats.
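Expanding a timeslot string that combines the two formats above can be sketched as follows. This is an illustrative sketch only, not U2000 code, and the function name is hypothetical.

```python
# Illustrative sketch (not U2000 code): expand a timeslot string that mixes
# the "Start timeslot number-End timeslot number" range format with the
# "ts1, ts2, ..." list format, for example "1-3, 5, 7-9".
def parse_timeslots(text):
    slots = []
    for part in text.split(","):
        part = part.strip()
        if "-" in part:
            start, end = (int(n) for n in part.split("-"))
            slots.extend(range(start, end + 1))  # expand the consecutive range
        else:
            slots.append(int(part))              # a single timeslot number
    return slots
```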
Values
Valid Values: Auto Switching, Idle, Unknown
Default Value: Auto Switching
Configuration Guidelines
None.
Values
Value Range: Normal, Forced Switching (Protection to Working), Forced Switching (Working to Protection)
Default Value: Normal
Configuration Guidelines
None.
Values
Value Range: Forced Switching to Working, Forced Switching to Protection, Clear Switching
Configuration Guidelines
Set this parameter as required.
Values
Value Range: Lockout of Protection, Clear
Configuration Guidelines
Set this parameter as required.
Values
Value Range: Transparent transmission, Interconnection monitored, Interconnection unmonitored
Default Value: Transparent transmission
Configuration Guidelines
When the service is of the E1 type, the setting of Interconnection unmonitored and
Interconnection monitored is not supported. When the service is of the T1 type, the setting of
Transparent transmission is not supported. When the service is changed from T1 to E1, the
value of this parameter is automatically changed to Transparent transmission.
This parameter is applicable to the PQM board.
Values
Value Range: Unframed, D4, ESF, SLC96, F4, M13
Default Value: Unframed
Configuration Guidelines
In transparent transmission mode, this parameter is valid when the frame is set to be monitored and the service is of the T1 type. In this case, the frame format is the format of the accessed service. If you do not need to monitor the frame format, set this parameter to Unframed.
In interconnection unmonitored mode, this parameter is valid and used to define the frame format
of the output service.
In interconnection monitored mode, this parameter is valid. In this case, the port monitors the
accessed service and defines the frame format of the output service according to the frame format
specified by this parameter.
This parameter is applicable to the PQM board.
Values
Value Range: Framed, Unframed, PCM30, PCM31, PCM30CRC, PCM31CRC
Default Value: Unframed
Configuration Guidelines
In transparent transmission mode, this parameter is valid when the frame is set to be monitored and the service is of the E1 type. In this case, the frame format is the format of the accessed service. If you do not need to monitor the frame format, set this parameter to Unframed.
In interconnection unmonitored mode, this parameter is valid and used to define the frame format
of the output service.
In interconnection monitored mode, this parameter is valid. In this case, the port monitors the
accessed service and defines the frame format of the output service according to the frame format
specified by this parameter.
This parameter is applicable to the PQM board.
Values
Value Range: Normal Mode, MUX Mode, Server Mode
Default Value: Normal Mode
Server Mode: Indicates that the tributary board does not drop any service, but only performs the conversion between E1 and E3 signals.
Configuration Guidelines
Select a proper value according to the actual requirement of the user.
This parameter is applicable to the N2PQ1, R2PD1, N2PQ3, N2PD3, N2PL3, and N2PL3A
boards.
Values
Value Range: E13, M13
Default Value: E13
Configuration Guidelines
Set this parameter to E13 when you need to convert E1 signals to E3 signals or convert E3 signals
to E1 signals. Set this parameter to M13 when you need to convert T1 signals to T3 signals or
convert T3 signals to T1 signals.
This parameter is applicable to the N2PQ1, N2PQ3, N2PL3, N2PL3A, N2PD3, and R2PD1
boards that are used on the OptiX NG-SDH equipment series.
Values
Value Range: Unframe, Frame
Default Value: Unframe
Unframe: Indicates that the tributary board does not monitor the frame format of the signal.
Configuration Guidelines
This parameter is applicable to the N2PQ1, N2PQ3, N2PL3, N2PL3A, N2PD3, and R2PD1
boards that are used on the OptiX NG-SDH equipment series.
Values
Value Range: Common, Security SSL
Default Value: Common
Configuration Guidelines
If the communication environment between the NE and the U2000 is secure, the common mode
can be used. If the security requirement for the in-between communication is high, the security
SSL mode can be used to prevent packet interception.
Related Information
None.
Values
Value Range: Disabled, Enabled
Default Value: Disabled
Configuration Guidelines
Set this parameter according to the actual requirement of the user. If the TCM function is
required, set this parameter to Enabled.
Only the N2 board series in the SDH board category support the setting of this parameter because
only these boards support the TCM function.
Values
Value Range: A 16-byte string
Configuration Guidelines
Set this parameter according to the actual requirement of the user. It is recommended that you
use the default value.
Values
Value Range: Enabled, Disabled
Default Value: Disabled
Configuration Guidelines
If lower order cross-connection resources are insufficient, you can enable the higher order path-through function to configure more lower order services.
B Glossary
Glossary
Numerics
1+1 backup
A backup method in which two components mirror each other. If the active component
goes down, the standby component takes over services from the active component to
ensure that the system service is not interrupted.
1:N protection
An architecture that has N normal service signals, N working SNCs/trails, and one
protection SNC/trail. It may have one extra service signal.
3G
3R
A
A/D
analog/digital
AAA
AAL
AAL2
AAL5
ABR
ACAP
ACH
ACL
ACL rule
ADM
add/drop multiplexer
AF
AGC
AIO
asynchronous input/output
AIS
AIS insertion
Insertion of AIS in a channel with excessive errors to indicate that the channel is unavailable. For a line board, you can set whether to insert AIS when there are excessive errors in the B1, B2, and B3 bytes. For a tributary board at the E1/T1 level, you can set whether to insert AIS when there are excessive errors in BIP-2. For a tributary board at the E3 level or higher, you can set whether to insert AIS when there are excessive errors in the B3 byte.
ALS
AM
AMI
ANSI
APD
APID
APS
A protection architecture that comprises one protection facility and one working facility
and performs switchover by using the Automatic Protection Switching (APS) protocol.
Normally, signals are sent only over the working facility. If an APS switchover event is
detected by the working facility, services are switched over to the protection facility.
ARP
AS
ASCII
ASK
ATM
ATM Adaptation
Layer (AAL)
ATM protection group
Logically bound ATM VP network/subnetwork connections that share the same physical transmission channel. In the VP group (VPG), a pair of VP connections (a working connection and its protection connection), called the monitoring connections (APS VPCs), is used to monitor automatic protection switching. If the monitoring connections switch over, the whole VPG switches over, which quickens ATM protection switching (making it as fast as the protection switching of the SDH layer).
ATPC
AU
AUG
AWG
Address Resolution
Protocol (ARP)
An Internet Protocol used to map IP addresses to MAC addresses. It allows hosts and
routers to determine the link layer addresses through ARP requests and ARP responses.
American National
Standards Institute
(ANSI)
An organization that defines U.S. standards for the information processing industry. ANSI participates in defining network protocol standards.
Authentication,
Authorization and
Accounting (AAA)
access
A link between the customer and the telecommunication network. Many technologies,
such as the copper wire, optical fiber, mobile, microwave and satellite, are used for
access.
access control list (ACL)
A list of entities, together with their access rights, which are authorized to have access to a resource.
access layer
A layer that connects the end users (or last mile) to the ISP network. The access layer
devices are cost-effective and have high-density interfaces. In an actual network, the
access layer includes the devices and cables between the access points and the UPEs.
access point
Any entity that has station functionality and provides access to the distribution services,
via the wireless medium (WM) for associated stations.
accumulation
The sum of the service usage, consumption, and recharge fees of a subscriber.
active link
A link in the link aggregation group, which is connected to the active interface.
active mode
A working mode of EFM OAM. The discovery and remote loopback can only be initiated
by the interface in the active mode.
adaptive modulation
(AM)
A technology that is used to automatically adjust the modulation mode according to the channel quality. When the channel quality is favorable, the equipment uses a high-efficiency modulation mode to improve the transmission efficiency and the spectrum utilization of the system. When the channel quality is degraded, the equipment uses a low-efficiency modulation mode to improve the anti-interference capability of the link that carries high-priority services.
adjacency
A portion of the local routing information which pertains to the reachability of a single
neighbor ES or IS over a single circuit. Adjacencies are used as input to the Decision
Process for forming paths through the routing domain. A separate adjacency is created
for each neighbor on a circuit, and for each level of routing (i.e. level 1 and level 2) on
a broadcast circuit.
adjacent channel
alternate polarization
(ACAP)
adjacent concatenation
A situation where the virtual containers (VCs) that carry concatenated services in SDH are consecutive in the frame structure, so that they use the same path overhead (POH).
administrative unit
(AU)
The information structure which provides adaptation between the higher order path layer
and the multiplex section layer. It consists of an information payload (the higher order
VC) and an AU pointer which indicates the offset of the payload frame start relative to
the multiplex section frame start.
administrative unit
group (AUG)
One or more administrative units occupying fixed, defined positions in an STM payload.
An AUG consists of AU-4s.
advanced ACL
An ACL that defines ACL rules based on information such as the source address, the target address, the protocol type, the TCP source or target port, the ICMP message type, and the message code.
aggregated link
aging time
air interface
The interface between the cellular phone set or wireless modem (usually portable or
mobile) and the active base station.
alarm
alarm box
A device that reflects the status of an alarm in visual-audio mode. The alarm box notifies
you of the alarm generation and alarm severity after it is connected to the Signaling
Network Manager server or client and the related parameters are set.
alarm cascading
alarm correlation
analysis
A process to analyze correlated alarms. For example, if alarm 2 is generated within five
seconds after alarm 1 is generated, and it complies with the conditions defined in the
alarm correlation analysis rule, you can either mask the alarm or raise the level of alarm
2 according to the behavior defined in the alarm correlation rule.
alarm filtering
An alarm management method. Alarms are detected and reported to the NMS system,
and whether the alarm information is displayed and saved is decided by the alarm filtering
status. An alarm with the filtering status set to "Filter" is not displayed and saved on the
NMS, but is monitored on the NE.
alarm indication
A function that indicates the alarm status of an NE. On the cabinet of an NE, there are
four indicators in different colors indicating the current alarm status of the NE. When
the green indicator is on, the NE is powered on. When the red indicator is on, a critical
alarm is generated. When the orange indicator is on, a major alarm is generated. When
the yellow indicator is on, a minor alarm is generated. The ALM alarm indicator on the
front panel of a board indicates the current status of the board.
alarm inversion
A mode for an NE that indicates whether the port is automatically restored to the normal status after the service is accessed or the fault is removed. There are three alarm inversion modes: normal, revertible, and non-revertible.
alarm notification
When an error occurs, the performance measurement system sends performance alarms
to the destination (for example, a file and/or fault management system) designated by
users.
alarm suppression
An alarm management method. Alarms that are set to be suppressed are not reported
from NEs any more.
alternate mark
inversion (AMI)
A synchronous clock encoding technique which uses bipolar pulses to represent logical
1 values.
analog signal
assured forwarding
(AF)
One of the four per-hop behaviors (PHB) defined by the Diff-Serv workgroup of IETF.
It is suitable for certain key data services that require assured bandwidth and short delay.
For traffic within the bandwidth limit, AF assures quality in forwarding. For traffic that
exceeds the bandwidth limit, AF degrades the service class and continues to forward the
traffic instead of discarding the packets.
attack
An attempt to bypass security controls in a system with the mission of using that system
or compromising it. An attack is usually accomplished by exploiting a current
vulnerability.
attenuation
attenuator
A device used to increase the attenuation of an optical fiber link. Generally used to ensure that the signal at the receive end is not too strong.
automatic laser
shutdown (ALS)
automatic transmit power control (ATPC)
A method of adjusting the transmit power based on fading of the transmit signal detected
at the receiver.
autonomous system
(AS)
A set of networks that uses the same routing policy and is managed by the same technical
administration department. Each AS has a unique identifier, an integer ranging from 1 to
65535, which is assigned by IANA. An AS can be divided into areas.
availability
A capability of providing services at any time. The probability of this capability is called
availability.
available bit rate (ABR)
One of the service categories defined by the ATM Forum. ABR provides only best-effort
forwarding and applies to connections that do not require real-time quality. It does not
provide any guarantee in terms of cell loss or delay.
avalanche photodiode
(APD)
average delay
A performance indicator indicating the average RTT of multiple ping operations or other
probe operations. It is expressed in milliseconds.
B
B-ISDN
BA
booster amplifier
BBE
BC
boundary clock
BCD
BDI
BDI packet
A packet used to notify the upstream LSR of the failure event which has occurred on the
downstream LSR through the reverse LSP. The BDI packet can be used in the 1:1/N
protective switchover service.
BE
BER
BFD
BGP
BIP
BIP-8
BIP-X
BITS
BMC
BNC
See bayonet-neill-concelman.
BPDU
BPS
BSC
BSS
BTS
BWS
Bidirectional
Forwarding Detection
(BFD)
A simple Hello protocol, similar to the adjacent detection in the route protocol. Two
systems periodically send BFD detection messages on the channel between the two
systems. If one system does not receive the detection message from the other system for
a long time, you can infer that the channel is faulty. Under some conditions, the TX and
RX rates between systems need to be negotiated to reduce traffic load.
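The detection rule described above can be illustrated with a minimal sketch (illustrative only; the class and parameter names are chosen here and are not part of the BFD protocol or any product API):

```python
import time

class BfdSession:
    """Minimal sketch of BFD-style failure detection, not the real protocol.

    The peer is declared down when no hello packet has been received for
    detect_mult consecutive transmit intervals.
    """
    def __init__(self, interval_s: float = 0.05, detect_mult: int = 3):
        self.detect_time = interval_s * detect_mult
        self.last_rx = time.monotonic()

    def on_hello(self) -> None:
        # A detection message arrived: the peer is alive, restart the timer.
        self.last_rx = time.monotonic()

    def is_up(self) -> bool:
        return (time.monotonic() - self.last_rx) < self.detect_time

session = BfdSession(interval_s=0.01, detect_mult=3)
session.on_hello()
assert session.is_up()
time.sleep(0.05)          # more than three intervals without a hello
assert not session.is_up()
```

In a real implementation the two systems would also negotiate the transmit and receive intervals, as the definition notes, to limit the traffic load.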
backbone network
A network that forms the central interconnection for a connected network. The
communication backbone for a country is WAN. The backbone network is an important
architectural element for building enterprise networks. It provides a path for the exchange
of information between different LANs or subnetworks. A backbone can tie together
diverse networks in the same building, in different buildings in a campus environment,
or over wide areas. Generally, the backbone network's capacity is greater than the
networks connected to it.
backplane
An electronic circuit board containing circuits and sockets into which additional
electronic devices on other circuit boards or cards can be plugged.
backup
A periodic operation performed on the data stored in the database for the purposes of
database recovery in case that the database is faulty. The backup also refers to data
synchronization between active and standby boards.
backward defect
indication (BDI)
A function that the sink node of a LSP, when detecting a defect, uses to inform the
upstream end of the LSP of a downstream defect along the return path.
bandwidth
An area of radio coverage consisting of cells served by one or more Base Transceiver
Stations (BTSs) in the same base station site.
base station controller (BSC)
A logical entity that connects the BTS with the MSC in a GSM/CDMA network. It
interworks with the BTS through the Abis interface and with the MSC through the A interface.
It provides the following functions: radio resource management, base station
management, power control, handover control, and traffic measurement. One BSC
controls and manages one or more BTSs in an actual network.
baseband
A form of modulation in which the information is applied directly onto the physical
transmission medium.
bayonet-neill-concelman (BNC)
bearer
An information transmission path with defined capacity, delay and bit error rate.
bearer network
best effort (BE)
A traditional IP packet transport service in which datagrams are forwarded in the order
in which they arrive. All datagrams share the bandwidth of the network and routers, and
the amount of resources that a datagram can use depends on its arrival time. BE service
provides no guarantees on delay, jitter, packet loss ratio, or reliability.
best-effort service
A unitary and simple service model in which an application can send any number of
packets at any time without requesting approval or notifying the network. The network tries
its best to deliver the packets, but delay and reliability cannot be guaranteed. Best-effort is
the default service model of the Internet and suits applications such as FTP and e-mail.
It is implemented through the first in first out (FIFO) queue.
bit error
bit interleaved parity-X (BIP-X)
A method of error monitoring. With even parity, an X-bit code is generated by equipment
at the transmit end over a specified portion of the signal in such a manner that the first
bit of the code provides even parity over the first bit of all X-bit sequences in the covered
portion of the signal, the second bit provides even parity over the second bit of all X-bit
sequences within the specified portion, and so on. Even parity is generated by setting the
BIP-X bits so that there is an even number of 1s in each monitored partition of the signal.
A monitored partition comprises all bits which are in the same bit position within the X-bit
sequences in the covered portion of the signal. The covered portion includes the BIP-X.
bit interleaved parity-8 (BIP-8)
A parity byte calculated bit-wise across a large number of bytes in a transmission
transport frame. A frame is divided into 8-bit (one-byte) blocks, which are arranged in a
matrix, and the number of 1s in each column is counted. The corresponding bit of the
parity byte is set to 1 if the count is odd, and to 0 otherwise.
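Because each parity bit provides even parity over one bit column, the BIP-8 byte is simply the bitwise XOR of all bytes in the monitored block. A minimal Python sketch of the calculation (illustrative only; the function name is chosen here, not an equipment API):

```python
from functools import reduce

def bip8(data: bytes) -> int:
    """Compute the BIP-8 even-parity byte over a block of bytes.

    Each bit of the result gives even parity over the corresponding bit
    position of every byte in the block, i.e. a column-wise XOR.
    """
    return reduce(lambda acc, b: acc ^ b, data, 0)

block = bytes([0b10110001, 0b01100011, 0b11111111])
parity = bip8(block)
# Appending the parity byte makes every bit column hold an even number of 1s,
# so the XOR over the extended block is zero.
assert bip8(block + bytes([parity])) == 0
```

The receiver recomputes the same XOR over the monitored block and compares it with the transmitted BIP-8 byte; a mismatch indicates one or more bit errors.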
blacklist
A method of filtering packets based on their source IP addresses. Compared with an ACL,
the match condition of a blacklist is much simpler. Therefore, a blacklist can filter
packets at a higher speed and can effectively screen packets sent from a specific IP
address.
bound path
A parallel path with several serial paths bundled together. It improves the data throughput
capacity.
bridge
A device that connects two or more networks and forwards packets among them. Bridges
operate at the physical network level. Bridges differ from repeaters because bridges store
and forward complete packets, while repeaters forward all electrical signals. Bridges
differ from routers because bridges use physical addresses, while routers use IP
addresses.
bridge protocol data unit (BPDU)
The data messages that are exchanged across the switches within an extended LAN that
uses a spanning tree protocol (STP) topology. BPDU packets contain information on
ports, addresses, priorities, and costs, and ensure that the data ends up where it was
intended to go. BPDU messages are exchanged across bridges to detect loops in a
network topology. The loops are then removed by shutting down selected bridge
interfaces and placing redundant switch ports in a backup, or blocked, state.
bridging
The action of transmitting identical traffic on the working and protection channels
simultaneously.
broadband integrated services digital network (B-ISDN)
A standard defined by the ITU-T to handle high-bandwidth applications, such as voice.
It currently uses ATM technology to transmit data over SONET-based circuits at
155 to 622 Mbit/s or higher.
broadcast
broadcast address
A network address in computer networking that allows information to be sent to all nodes
on a network, rather than to a specific network host.
broadcast domain
A group of network stations that receives broadcast packets originating from any device
within the group. The broadcast domain also refers to the set of ports between which a
device forwards a multicast, broadcast, or unknown destination frame.
building integrated timing supply (BITS)
A device installed at a hub of the telecom network that, where multiple synchronous
nodes or communication devices are present, sets up a clock system connecting the
synchronization network as a whole and provides satisfactory synchronization reference
signals to the integrated equipment in the building.
built-in WDM
A function which integrates some simple WDM systems into products that belong to the
OSN series. That is, the OSN products can add or drop several wavelengths directly.
burst
A process of forming data into a block of the proper size, uninterruptedly sending the
block in a fast operation, waiting for a long time, and preparing for the next fast sending.
C
CAC
CAR
CAS multiframe
A multiframe set up based on timeslot 16. Each CAS multiframe contains 16 E1 PCM
frames. Among the 8 bits of timeslot 16 in the first frame, the first 4 bits are used for
multiframe synchronization. The multiframe alignment signal (MFAS) for
synchronization is 0000. The last 4 bits are used as the non-multiframe alignment signal
(NMFAS). The NMFAS is XYXX. For the other 15 frames, timeslot 16 is used to
transmit exchange and multiplexing (E&M) signaling corresponding to each timeslot.
CBR
CBS
CC
CCDP
CCS
CDVT
CE
CES
CFM
CFR
CIR
CIST
CLEI
CLK
clock card
CLNP
CLP
CMI
CO
central office
CPU
CR
connection request
CRC
CRC-4 multiframe
A multiframe recommended by ITU-T G.704 and set up based on the first bit of timeslot
0. The CRC-4 multiframe is different from the CAS multiframe in principle and
implementation. Each CRC-4 multiframe contains 16 PCM frames. Each CRC-4
multiframe consists of two CRC-4 sub-multiframes. Each CRC-4 sub-multiframe is a
CRC-4 check block that contains 2048 (256 x 8) bits. Bits C1 to C4 of a check block can
check the previous check block.
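The CRC-4 check itself can be sketched as a bitwise division by the ITU-T G.704 generator polynomial x^4 + x + 1. The following Python sketch is illustrative only (the function name and bit-list interface are chosen here, not taken from the standard's notation):

```python
def crc4(bits, poly=0b10011):
    """CRC-4 remainder with generator x^4 + x + 1 (ITU-T G.704),
    MSB first, zero initial value."""
    reg = 0
    for bit in bits:
        reg = (reg << 1) | bit
        if reg & 0x10:            # 5th bit set: subtract the generator
            reg ^= poly
    for _ in range(4):            # flush four zero bits to emit the remainder
        reg <<= 1
        if reg & 0x10:
            reg ^= poly
    return reg & 0xF

frame = [1, 0, 1, 1, 0, 0, 1, 0]
r = crc4(frame)
# A received block with its 4 CRC bits appended leaves a zero remainder.
check = frame + [(r >> i) & 1 for i in (3, 2, 1, 0)]
assert crc4(check) == 0
```

This mirrors how bits C1 to C4 of one sub-multiframe verify the 2048-bit block that preceded them: the receiver recomputes the remainder and compares it with the transmitted CRC-4 bits.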
CSA
CSES
CSF
CSMA/CD
CST
CTC
CV
connectivity verification
CV packet
A type of packet that is generated at the frequency of 1/s on the source end LSR of an
LSP, and is terminated on the destination end LSR of the LSP. A CV packet is transmitted
from the source end LSR to the destination LSR along the LSP. A CV packet contains
the unique identifier (TTSI) of the LSP so that all types of abnormalities on the path can
be detected.
CW
control word
CWDM
Common Channel Signaling (CCS)
A signaling system used in telephone networks that separates signaling information from
user data. A specified channel is exclusively designated to carry signaling information
for all other channels in the system.
Common and Internal Spanning Tree (CIST)
The single spanning tree jointly calculated by STP, RSTP, and MSTP, including the
logical connectivity among MST bridges and regions. The CIST ensures that all LANs
in the bridged local area network are simply and fully connected.
Coordinated Universal Time (UTC)
The worldwide scientific standard of timekeeping. It is based upon carefully maintained
atomic clocks and is kept accurate to within microseconds worldwide.
cabinet
carrier sense multiple access with collision detection (CSMA/CD)
A mechanism in which a transmitting data station that detects another signal while
transmitting a frame stops transmitting that frame, transmits a jam signal, and then waits
for a random time interval before trying to send that frame again.
cell loss priority (CLP)
A field in the ATM cell header that determines the probability of a cell being dropped
if the network becomes congested. Cells with CLP = 0 are insured traffic, which is
unlikely to be dropped. Cells with CLP = 1 are best-effort traffic, which might be
dropped.
central processing unit (CPU)
The computational and control unit of a computer. The CPU is the device that interprets
and executes instructions. It has the ability to fetch, decode, and execute instructions
and to transfer information to and from other resources over the computer's main
data-transfer path, the bus.
centralized alarm
The alarms of all the hosts connecting to the Operation and Maintenance Unit (OMU).
channel
channel spacing
check criteria
A set of rules for checking and analyzing device echo information. The check criteria
for an alarm collection item need to be set through the configuration file.
circuit emulation
service (CES)
A function with which the E1/T1 data can be transmitted through ATM networks. At the
transmission end, the interface module packs timeslot data into ATM cells. These ATM
cells are sent to the reception end through the ATM network. At the reception end, the
interface module re-assigns the data in these ATM cells to E1/T1 timeslots. The CES
technology guarantees that the data in E1/T1 timeslots can be recovered to the original
sequence at the reception end.
clock selection
An algorithm used for selecting the best clock for clock synchronization. For different
peers (multiple servers or peers configured for a client), a peer sends clock
synchronization packets to each server or passive peer. After receiving the response
packets, it uses the clock selection algorithm to select the best clock.
clock source
clock synchronization
Also called frequency synchronization. The signal frequency traces the reference
frequency, but the start point does not need to be consistent.
clock tracing
The method to keep the time on each node synchronized with a clock source in a network.
co-channel dual
polarization (CCDP)
A channel configuration method, which uses a horizontal polarization wave and a vertical
polarization wave to transmit two signals. The Co-Channel Dual Polarization has twice
the transmission capacity of the single polarization.
coarse wavelength
division multiplexing
(CWDM)
collision
A condition in which two packets are being transmitted over a medium at the same time.
Their interference makes both unintelligible.
committed burst size (CBS)
A parameter used to define the capacity of token bucket C, that is, the maximum burst
IP packet size when the information is transferred at the committed information rate.
This parameter must be larger than 0, and it is recommended that it be not less than the
maximum length of an IP packet that might be forwarded.
common spanning tree (CST)
A single spanning tree that connects all the MST regions in a network. If every MST
region is considered as a switch, the CST can be considered as their spanning tree
generated with STP/RSTP.
composite service
conference
An IP multimedia session that has two or more participants. Each conference has a
focus and can be identified uniquely.
congestion
congestion
management
A flow control measure to solve the problem of network resource competition. When
the network congestion occurs, it places packets into the queue for buffer and determines
the packet forwarding order.
connection
connection admission
control (CAC)
A control process in which the network takes actions in the call set-up phase (or call renegotiation phase) to determine which connection request is admitted.
connection point
A reference point where the output of a trail termination source or a connection is bound
to the input of another connection, or where the output of a connection is bound to the
input of a trail termination sink or another connection. The connection point is
characterized by the information which passes across it. A bidirectional connection point
is formed by the association of a contradirectional pair.
connectionless
Pertaining to a method of data presentation. The data has a complete destination address
and is delivered by the network on a best-effort basis, independent of other data being
exchanged between the same pair of users.
constant bit rate (CBR)
One of the service categories defined by the ATM Forum. CBR transfers cells based on
a constant bandwidth. It is applicable to service connections that depend on precise
clocking to ensure undistorted transmission.
container
continuity check (CC)
A function with which Ethernet CFM can detect the connectivity between MEPs. The
detection is achieved after MEPs transmit Continuity Check Messages (CCMs) periodically.
control VLAN
control channel
The channel used to transmit digital control information from the base station to a cell
phone or vice-versa.
convergence layer
A "bridge" between the access layer and the core layer. The convergence layer provides
the convergence and forwarding functions for the access layer. It processes all the traffic
from the access layer devices, and provides the uplinks to the core layer. Compared with
the access layer, the convergence layer devices should have higher performance, fewer
interfaces and higher switching rate. In the real network, the convergence layer refers to
the network between UPEs and PE-AGGs.
cooling system
The system that controls or influences climate by decreasing the air temperature only.
core layer
A layer that functions as the backbone of high speed switching for networks and provides
high speed forwarding communications. It has a backbone transmission structure that
provides high reliability, high throughput, and low delay. The core layer devices must
have a good redundancy, error tolerance, manageability, adaptability, and they support
dual-system hot backup or load balancing technologies. In a real network, the core layer
includes the IP/MPLS backbone network consisting of NPEs and backbone routers.
correlation
corruption
The alteration of the information in IMS networks for the purpose of deception. For
example, attackers corrupt the correct charging information to evade being charged.
cross-connection
The connection of channels between the tributary board and the line board, or between
line boards inside the NE. Network services are realized through the cross-connections
of NEs.
crossover cable
A twisted pair patch cable wired in such a way as to route the transmit signals from one
piece of equipment to the receive signals of another piece of equipment, and vice versa.
crystal oscillator
customer edge (CE)
A part of the BGP/MPLS IP VPN model that provides interfaces for direct connection to the
Service Provider (SP) network. A CE can be a router, a switch, or a host.
cutover
To migrate the data of an application system to another application system, which then
provides services.
cyclic redundancy check (CRC)
A procedure used in checking for errors in data transmission. CRC error checking uses
a complex calculation to generate a number based on the data transmitted. The sending
device performs the calculation before transmission and includes it in the packet that it
sends to the receiving device. The receiving device repeats the same calculation after
transmission. If both devices obtain the same result, it is assumed that the transmission
was error free. The procedure is known as a redundancy check because each transmission
includes not only data but extra (redundant) error-checking values.
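The sender-computes/receiver-recomputes procedure described above can be sketched in a few lines of Python using the standard library's CRC-32 (illustrative only; the `send`/`receive` helpers are names chosen here, not a product API):

```python
import zlib

def send(payload: bytes) -> bytes:
    """Sender side: append the CRC-32 of the payload (4 bytes, big-endian)."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive(packet: bytes) -> bytes:
    """Receiver side: recompute the CRC and compare with the transmitted value."""
    payload, crc = packet[:-4], int.from_bytes(packet[-4:], "big")
    if zlib.crc32(payload) != crc:
        raise ValueError("CRC mismatch: transmission error detected")
    return payload

pkt = send(b"hello")
assert receive(pkt) == b"hello"
corrupted = bytes([pkt[0] ^ 0x01]) + pkt[1:]   # flip one bit "in transit"
# receive(corrupted) now raises ValueError
```

Both devices perform the identical calculation; equality of the redundant check values is what justifies assuming an error-free transmission.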
D
D/A
digital-analog converter
DB
database
DC
direct current
DC-C
DC-I
DC-return common (with ground) (DC-C)
A power system in which the BGND of the DC return conductor is short-circuited with
the PGND on the output side of the power supply cabinet and also on the line between
the output of the power supply cabinet and the electric equipment.
DC-return isolated (with ground) (DC-I)
A power system in which the BGND of the DC return conductor is short-circuited with
the PGND on the output side of the power supply cabinet but is isolated from the PGND
on the line between the output of the power supply cabinet and the electric equipment.
DCC
DCE
DCF
DCM
DCN
DDF
DDN
DHCP
DLAG
DM
DNI
DRDB
DS interior node
DS node
A DS-compliant node, which is subdivided into DS boundary node and DS interior node.
DSCP
DSL
DSLAM
DSP
DTE
DTR
DVB
DVB-ASI
DVMRP
DWDM
Distance Vector
Multicast Routing
Protocol (DVMRP)
An Internet gateway protocol mainly based on the RIP. The protocol implements a typical
dense mode IP multicast solution. The DVMRP protocol uses IGMP to exchange routing
datagrams with its neighbors.
Dynamic Host Configuration Protocol (DHCP)
A client-server networking protocol. A DHCP server provides configuration parameters
specific to the requesting DHCP client host, generally information required by the host
to participate on the Internet. DHCP also provides a mechanism for allocating IP
addresses to hosts.
data backup
A method that is used to copy key data to the standby storage area, to prevent data loss
in the case of damage or failure in the original storage area.
data circuit-terminating equipment (DCE)
The equipment that provides the signal conversion and coding between the data terminal
equipment (DTE) and the line. A DCE is located at a data station. The DCE may be
separate equipment, or an integral part of the DTE or intermediate equipment. The DCE
may perform other functions that are normally performed at the network end of the line.
data communication
network (DCN)
data communications channel (DCC)
The data channel that uses the D1 to D12 bytes in the overhead of an STM-N signal to
transmit operation, administration, maintenance, and provisioning (OAM&P)
information between NEs. The DCC channels composed of bytes D1 to D3 are
referred to as the 192 kbit/s DCC-R channel. The other DCC channels, composed
of bytes D4 to D12, are referred to as the 576 kbit/s DCC-M channel.
data flow
A process that involves processing the data extracted from the source system, such as
filtering, integration, calculation, and summary, finding and solving data inconsistency,
and deleting invalid data so that the processed data meets the requirements of the
destination system for the input data.
data mapping
An algorithm that is used to convert the data between heterogeneous data models.
data restoration
data terminal
equipment (DTE)
A user device composing the UNI. The DTE accesses the data network through the DCE
equipment (for example, a modem) and usually uses the clock signals produced by DCE.
datagram
A kind of protocol data unit (PDU) which is used in Connectionless Network Protocol
(CLNP), such as IP datagram, UDP datagram.
defect
delay measurement
(DM)
The time elapsed since the start of transmission of the first bit of the frame by a source
node until the reception of the last bit of the loopbacked frame by the same source node,
when the loopback is performed at the frame's destination node.
demodulation
In communications, the means by which a modem converts data from modulated carrier
frequencies (waves that have been modified in such a way that variations in amplitude
and frequency represent meaningful information) over a telephone line. Data is converted
to the digital form needed by a computer to which the modem is attached, with as little
distortion as possible.
dense wavelength
division multiplexing
(DWDM)
The technology that utilizes the characteristics of broad bandwidth and low attenuation
of single mode optical fiber, employs multiple wavelengths with specific frequency
spacing as carriers, and allows multiple channels to transmit simultaneously in the same
fiber.
designated port
A port defined in the STP protocol. On each switch that runs the STP protocol, the traffic
from the root bridge is forwarded to the designated port. The subnet connected to the
STP switch receives the data traffic from the root bridge. All the ports on the root bridge
are designated ports. On each subnet, there is only one designated port. When a network
topology is stable, only the root port and the designated port forward traffic. Other
non-designated ports are in the blocking state; they receive STP packets but do not
forward user traffic.
destruction
A process during which the information and resources in a network are changed
unexpectedly and the meanings of the information and resources are deleted or changed.
digital data network (DDN)
A high-quality data transport tunnel that combines the digital channel (such as the fiber
channel, digital microwave channel, or satellite channel) and the cross multiplexing
technology.
digital modulation
A method that controls the changes in amplitude, phase, and frequency of the carrier
based on the changes in the baseband digital signal. In this manner, the information can
be transmitted by the carrier.
digital network
digital signal
digital subscriber line (DSL)
A technology for providing digital connections over the copper wire of the local
telephone network. DSL performs data communication over the POTS lines without
affecting the POTS service.
digital subscriber line access multiplexer (DSLAM)
A network device, usually situated in the main office of a telephone company, that
receives signals from multiple customer digital subscriber line (DSL) connections and
puts the signals on a high-speed backbone line using multiplexing techniques.
dispersion
dispersion
compensation module
(DCM)
distributed link
aggregation group
(DLAG)
A board-level port protection technology used to detect unidirectional fiber cuts and to
negotiate with the opposite end. Once a link down failure occurs on a port or a hardware
failure occurs on a board, the services can automatically be switched to the slave board,
achieving 1+1 protection for the inter-board ports.
domain
A logical subscriber group based on which the subscriber rights are controlled.
dotted decimal notation
A format of IP address in which the address is separated into four parts by dots ("."),
with each part expressed as a decimal number.
download
downstream
In an access network, the direction of transmission toward the subscriber end of the link.
dual-ended switching
A protection operation method which takes switching action at both ends of the protected
entity (for example, "connection", "path"), even in the case of a unidirectional failure.
dual-polarized antenna
An antenna intended to simultaneously radiate or receive two independent, orthogonally
polarized radio waves.
E
E-Aggr
E-LAN
E-Line
EA
encryption algorithm
EBS
ECC
EDFA
EEPROM
EF
EFCI
EFM
EFM OAM
EIA
EIR
EMC
EMI
EMS
EPD
EPL
EPLAN
ERPS
ESC
ESCON
ESD
electrostatic discharge
ESN
ETS
ETSI
EVC
EVPL
EVPLAN
EXP
Electronic Industries
Alliance (EIA)
EoD
Ethernet
A LAN technology that uses Carrier Sense Multiple Access/Collision Detection. The
speed of an Ethernet interface can be 10 Mbit/s, 100 Mbit/s, 1000 Mbit/s or 10000 Mbit/
s. An Ethernet network features high reliability and is easy to maintain.
Ethernet aggregation (E-Aggr)
EoD
A type of board. EoD boards bridge the PSN and TDM networks, enabling Ethernet
service transmission across PSN and TDM networks.
Ethernet private LAN service (EPLAN)
A type of Ethernet service provided by SDH, PDH, ATM, or MPLS networks. This
service is carried over a dedicated bridge and point-to-multipoint connections.
Ethernet private line (EPL)
A type of Ethernet service that is provided with dedicated bandwidth and point-to-point
connections on an SDH, PDH, ATM, or MPLS server layer network.
Ethernet virtual
private LAN service
(EVPLAN)
A type of Ethernet service provided by SDH, PDH, ATM, or MPLS networks. This
service is carried over a shared bridge and point-to-multipoint connections.
Ethernet virtual
private line (EVPL)
A type of Ethernet service provided by SDH, PDH, ATM, or MPLS networks. This
service is carried over a shared bridge and point-to-point connections.
European
Telecommunications
Standards Institute
(ETSI)
A standards-setting body in Europe. Also the standards body responsible for GSM.
eSFP
egress
The egress LER. A packet is transferred along the LSP, which consists of a series of LSRs,
after the packet is labeled.
electric supervisory
channel (ESC)
A technology that implements communication among all the nodes and transmission of
monitoring data in an optical transmission network. The monitoring data of ESC is
introduced into DCC service overhead and is transmitted with service signals.
electrically erasable programmable read-only memory (EEPROM)
A type of EPROM that can be erased with an electrical signal. It is useful for stable
storage for long periods without electricity while still allowing reprogramming. EEPROMs
contain less memory than RAM, take longer to reprogram, and can be reprogrammed only
a limited number of times before wearing out.
electromagnetic
compatibility (EMC)
electromagnetic
interference (EMI)
embedded control
channel (ECC)
A logical channel that uses a data communications channel (DCC) as its physical layer,
to enable transmission of operation, administration, and maintenance (OAM)
information between NEs.
emergency
maintenance
A type of measure taken to quickly rectify an emergency fault to recover the proper
running of the related system or device and to reduce losses.
encapsulation
engineering label
enterprise system
connection (ESCON)
A path protocol which connects the host with various control units in a storage system.
It is a serial bit stream transmission protocol. The transmission rate is 200 Mbit/s.
entity
A part, device, subsystem, functional unit, equipment, or system that can be considered
individually.
equalization
equipment serial number (ESN)
A string of characters that identifies a piece of equipment and ensures the correct
allocation of a license file to the specified equipment. It is also called the "equipment
fingerprint".
erbium-doped fiber amplifier (EDFA)
An optical device that amplifies optical signals. The device uses a short length of
optical fiber doped with the rare-earth element erbium, in which erbium ions are excited
to higher energy levels by pump sources. When pumped by an external light source, the
amplifier amplifies optical signals in a specific wavelength range.
error tolerance
The ability of a system or component to continue normal operation despite the presence
of erroneous inputs.
event
Anything that takes place on the managed object. For example, the managed object is
added, deleted, or modified.
excess burst size (EBS)
A parameter related to traffic. In the single rate three color marker (srTCM) mode,
traffic control is achieved by the token buckets C and E. Excess burst size is a parameter
used to define the capacity of token bucket E, that is, the maximum burst IP packet size
when the information is transferred at the committed information rate. This parameter
must be larger than 0, and it is recommended that it be not less than the maximum
length of an IP packet that might be forwarded.
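The interplay of the two buckets can be sketched as follows. This is a simplified color-blind srTCM in the spirit of RFC 2697, not the equipment's implementation; class and parameter names are chosen here for illustration:

```python
import time

class SrTcm:
    """Single rate three color marker sketch (color-blind mode).

    Bucket C holds up to cbs bytes and bucket E up to ebs bytes; both are
    replenished at the committed information rate, with tokens spilling
    from C into E once C is full.
    """
    def __init__(self, cir_bps: float, cbs: int, ebs: int):
        self.cir, self.cbs, self.ebs = cir_bps, cbs, ebs
        self.tc, self.te = float(cbs), float(ebs)   # buckets start full
        self.last = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        tokens = (now - self.last) * self.cir
        self.last = now
        spill = max(0.0, tokens - (self.cbs - self.tc))  # overflow of C
        self.tc = min(self.cbs, self.tc + tokens)
        self.te = min(self.ebs, self.te + spill)

    def mark(self, size: int) -> str:
        self._refill()
        if self.tc >= size:            # within CBS: conforming traffic
            self.tc -= size
            return "green"
        if self.te >= size:            # within EBS: excess burst
            self.te -= size
            return "yellow"
        return "red"                   # out of profile

meter = SrTcm(cir_bps=1.0, cbs=100, ebs=50)
assert meter.mark(100) == "green"
assert meter.mark(50) == "yellow"
assert meter.mark(10) == "red"
```

Packets marked yellow correspond to the degraded-but-forwarded traffic described under assured forwarding, while red packets are candidates for discard.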
excess information rate (EIR)
The bandwidth for excess or burst traffic above the CIR; it equals the actual
transmission rate minus the guaranteed rate.
exercise switching
An operation to check whether the protection switching protocol functions properly. The
protection switching is not really performed.
expedited forwarding (EF)
The highest-order QoS in the Diff-Serv network. The EF PHB is suitable for services
that demand a low packet loss ratio, short delay, and broad bandwidth. In all cases, EF
traffic is guaranteed a transmission rate equal to or faster than the set rate. The DSCP
value of the EF PHB is "101110".
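The binary codepoint given above maps to familiar decimal values, which a short sketch makes explicit (illustrative only):

```python
# The EF per-hop behavior uses the DSCP codepoint 101110 (binary).
EF_DSCP = 0b101110
assert EF_DSCP == 46            # decimal value commonly shown by tooling

# The DSCP occupies the six most significant bits of the IP ToS /
# Traffic Class byte; the remaining two bits carry ECN.
tos_byte = EF_DSCP << 2
assert tos_byte == 184          # ToS byte with EF marking and ECN = 00
```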
experimental bits (EXP)
A field in the MPLS packet header, three bits long. This field is usually used to identify
the CoS of the MPLS packet.
extended ID
The number of the subnet that an NE belongs to, for identifying different network
segments in a WAN. The physical ID of an NE is comprised of the NE ID and extended
ID.
external cable
The cables and optical fibers which are used for connecting electrical interfaces and
optical interfaces of one cabinet to interfaces of other cabinets or peripherals.
external links
The links between the current Web site and other Web sites. Generally, external links
refer to links from other Web sites to the current Web site.
extract
To read the data required by the destination system from the source system.
F
F1 byte
The user path byte, which is reserved for user purposes but is typically dedicated to network
providers. The F1 byte is mainly used to provide a temporary data or voice path for
special maintenance purposes. It belongs to the regenerator section overhead.
FC
FDB
flash database
FDD
FDDI
FDI
FDI packet
FDV
FE
FE port
FEC
FFD
FFD packet
A path failure detection method independent of CV. Unlike a CV packet, the frequency at which FFD packets are generated is configurable to satisfy different service requirements. By default, the frequency is 20/s. An FFD packet contains the same information as a CV packet. The destination end LSR processes FFD packets in the same way as it processes CV packets.
FICON
FIFO
FLR
FPGA
FPS
FR
FRU
FTN
FEC to NHLFE
FTP
Fiber Connect
(FICON)
A new generation connection protocol which connects the host to various control units.
It carries single byte command protocol through the physical path of fiber channel, and
provides higher rate and better performance than ESCON.
fairness
A feature in which, for any link in a ring network, the source node is provided with a certain bandwidth capacity when the data packets it transmits are constrained by the fairness algorithm.
Fast Ethernet
Any network that supports a transmission rate of 100 Mbit/s. Fast Ethernet is 10 times faster than 10BaseT, and inherits the frame format, MAC addressing scheme, MTU, and so on. Fast Ethernet is an extension of the IEEE 802.3 standard, and it uses the following three types of transmission media: 100BASE-T4 (4 pairs of phone twisted-pair cables), 100BASE-TX (2 pairs of data twisted-pair cables), and 100BASE-FX (2-core optical fibers).
fast protection switching (FPS)
A type of pseudo wire automatic protection switching (PW APS). When the working PW is faulty, the source transmits services to the protection PW and the sink receives the services from the protection PW. FPS generally works with the interworking function (IWF) to provide end-to-end protection for services.
fault
A failure to perform a function while the specified operations are performed. A fault does not include failures caused by preventive maintenance, insufficiency of external resources, or intentional settings.
fault alarm
A type of alarm caused by hardware and/or software faults, for example, board failure,
or by the exception that occurs in major functions. After handling, a fault alarm can be
cleared, upon which the NE reports a recovery alarm. Fault alarms are of higher severity
than event alarms.
fault detection
fault notification
A process wherein a fault is notified. For example, when a fault occurs on the local
interface, the local interface notifies the peer of the fault through OAMPDUs. The local
interface then records the fault in the log, and reports it to the NMS.
feeder
fiber channel (FC)
A high-speed transport technology used to build storage area networks (SANs). Fiber channel can run on networks carrying ATM and IP traffic. It is primarily used for transporting SCSI traffic from servers to disk arrays. Fiber channel supports single-mode and multi-mode fiber connections. Fiber channel signaling can run on both twisted-pair copper wires and coaxial cables. Fiber channel provides both connection-oriented and connectionless services.
Fiber Distributed Data Interface (FDDI)
A standard developed by the American National Standards Institute (ANSI) for high-speed fiber-optic local area networks (LANs). FDDI provides specifications for transmission rates of 100 megabits (100 million bits) per second on networks based on the token ring network.
fiber trough
fiber/cable
The general name for optical fibers and cables. It refers to the physical entities that connect the transmission equipment, carry the transmission objects (user information and network management information), and perform the transmission function in the transmission network. The optical fiber transmits optical signals, while the cable transmits electrical signals. The fiber/cable between NEs represents the optical fiber connection or cable connection between NEs. The fiber/cable between SDH NEs represents the connection relationship between NEs; in this case, the fiber/cable is of the optical fiber type.
field programmable gate array (FPGA)
firewall
fixed bandwidth
The bandwidth that is fully reserved and is allocated periodically in a GPON system to ensure the quality of cell transmission. If a T-CONT is provided with a fixed bandwidth and does not transmit cells, the OLT still allocates the fixed bandwidth; therefore, idle cells are transmitted upstream to the OLT from the ONU/ONT.
flash memory
flooding
A type of incident, such as insertion of a large volume of data, that results in denial of
service.
flow
flow queue
Services of the same type from one user are considered a single service flow. HQoS performs queue scheduling according to the services of each user. The service flows of each user are classified into four FQs, namely, CS, EF, AF, and BE. CS is assigned a traffic shaping percentage for Priority Queuing (PQ); EF, AF, and BE are assigned weights for Weighted Fair Queuing (WFQ). The two scheduling modes each occupy a certain bandwidth; they can act at the same time without interfering with each other.
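The WFQ part of the flow-queue scheduling above can be sketched as a packet-level weighted round-robin, a common approximation of WFQ. The queue names follow the text; the weights and `rounds` parameter are invented for illustration, and the PQ/shaping of the CS queue is omitted.

```python
from collections import deque

def wfq_schedule(queues: dict, weights: dict, rounds: int) -> list:
    """Packet-level weighted round-robin approximation of WFQ: in each
    round, a flow queue may dequeue up to `weight` packets, so over time
    each FQ receives bandwidth in proportion to its weight."""
    sent = []
    for _ in range(rounds):
        for name, weight in weights.items():
            queue = queues[name]
            for _ in range(weight):
                if queue:
                    sent.append(queue.popleft())
    return sent
```

With weights EF=2, AF=1, BE=1, the EF queue drains roughly twice as fast as the others, while no non-empty queue is starved.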
forward defect indication (FDI)
A packet generated by the node that first detects defects and passed forward to the sink node of the LSP. It includes fields to indicate the nature of the defect and its location. Its primary purpose is to suppress alarms being raised at affected higher-level client LSPs and (in turn) their client layers.
forward defect indication packet (FDI packet)
A packet that responds to the detected failure event. It is used to suppress alarms of the upper layer network where the failure has occurred.
forward error correction (FEC)
A bit error correction technology that adds correction information to the payload at the transmit end. Based on the correction information, bit errors generated during transmission are corrected at the receive end.
fragmentation
A process of breaking a packet into smaller units when transmitting over a network node
that does not support the original size of the packet.
frame delay variation (FDV)
A measurement of the variations in the frame delay between a pair of service frames, where the service frames belong to the same CoS instance on a point-to-point ETH connection.
frame loss ratio (FLR)
A ratio, expressed as a percentage, of the number of service frames not delivered divided by the total number of service frames during time interval T, where the number of service frames not delivered is the difference between the number of service frames arriving at the ingress ETH flow point and the number of service frames delivered at the egress ETH flow point in a point-to-point ETH connection.
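The FLR definition above reduces to a short calculation; the function name and counts below are illustrative.

```python
def frame_loss_ratio(ingress_frames: int, egress_frames: int) -> float:
    """Frame loss ratio over an interval T, as a percentage: frames not
    delivered (ingress count minus egress count) divided by the total
    number of service frames sent."""
    if ingress_frames == 0:
        return 0.0
    lost = ingress_frames - egress_frames
    return 100.0 * lost / ingress_frames
```

For example, 1000 frames arriving at the ingress flow point and 990 delivered at the egress flow point give an FLR of 1.0%.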
frame relay (FR)
free-run mode
frequency division duplex (FDD)
An application in which channels are divided by frequency. In an FDD system, the uplink and downlink use different frequencies. Downlink data is sent in bursts. Both uplink and downlink transmission use frames of fixed time length.
full rate
A type of data transmission rate. The service bandwidth can be 9.6 kbit/s, 4.8 kbit/s, or
2.4 kbit/s.
fully loaded
A state that indicates that all slots of a piece of equipment are in use, that is, the equipment
has no vacant slots.
fuse
A safety device that protects an electric circuit from excessive current, consisting of or
containing a metal element that melts when current exceeds a specific amperage, thereby
opening the circuit.
G
G-ACH
GAL
GCC
GCRA
GE
GFC
GFP
GNE
GPS
GRE
GSM
GTS
GUI
Generic Framing Procedure (GFP)
A framing and encapsulation method which can be applied to any data type. It has been standardized by ITU-T SG15.
Generic Routing Encapsulation (GRE)
A mechanism for encapsulating any network layer protocol over any other network. GRE is used for encapsulating IP datagrams tunneled through the Internet. GRE serves as a Layer 3 tunneling protocol and provides a tunnel for transparently transmitting data packets.
Global Positioning System (GPS)
gain
The difference between the optical power at the input optical interface of the optical amplifier and the optical power at the output optical interface of the jumper fiber, expressed in dB.
gateway
A device that connects two network segments using different protocols. It is used to
translate the data in the two network segments.
gateway network element (GNE)
A network element that is used for communication between the NE application layer and the NM application layer.
A flow control that is applicable to the A interface, C/D interface, and trunks and can be
achieved by integrating multiple function modules. It is adopted when the traffic is heavy,
or location update and authentication of multiple subscribers are performed after the
system restarts. It can efficiently prevent system breakdown caused by link congestion
or CPU overload.
generic traffic shaping (GTS)
A traffic control measure that proactively adjusts the output speed of the traffic, to adapt the traffic to the network resources that the downstream router can provide and to avoid packet discarding and congestion.
gigabit Ethernet (GE)
ground terminal
H
HCS
HD-SDI
HDB3
HDLC
HDTV
HEC
HPA
HPT
HQoS
HSDPA
HSI
high-speed Internet
hang up
A call processing mode used by an attendant to end the conversation with a user.
hardware loopback
A connection mode in which a fiber jumper is used to connect the input optical interface
to the output optical interface of a board to achieve signal loopback.
header error control (HEC)
A field within the ATM frame whose purpose is to correct any single-bit error in the cell header and to detect multi-bit errors. It performs a CRC check over the first four header bytes, which is verified at the receiving end.
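The CRC check described above can be sketched as follows. Per ITU-T I.432 the HEC is commonly described as a CRC-8 with generator x^8 + x^2 + x + 1 over the first four header octets, with the transmitter XORing the remainder with 0x55; treat these constants as background assumptions rather than a statement from this document.

```python
def atm_hec(header4: bytes) -> int:
    """Bitwise CRC-8 (generator polynomial x^8 + x^2 + x + 1, i.e. 0x07)
    over the first four ATM header octets, XORed with 0x55 as the
    transmitter does before placing the result in the fifth octet."""
    assert len(header4) == 4
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ 0x07) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc ^ 0x55
```

The receiver recomputes the same value over the received header and compares it with the HEC octet; a single-bit mismatch can be corrected, and multi-bit mismatches are detected.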
hello packet
The most common packet, which is periodically sent by a router to its neighbors. It contains information about the DR, the backup designated router (BDR), known neighbors, and timer values.
hierarchical quality of service (HQoS)
A type of QoS that controls the traffic of users and performs the scheduling according to the priority of user services. HQoS has an advanced traffic statistics function, and the administrator can monitor the usage of bandwidth of each service. Hence, the bandwidth can be allocated reasonably through traffic analysis.
high definition television (HDTV)
high definition-serial digital interface signal (HD-SDI)
historical performance data
The performance data that is stored in the history register or that is automatically reported and stored on the NMS.
hop
A network connection between two distant nodes. For Internet operation a hop represents
a small step on the route from one main computer to another.
hot patch
A patch that is used to repair a deficiency in the software or add a new feature to a program
without restarting the software and interrupting the service. For the equipment using the
built-in system, a hot patch can be loaded, activated, confirmed, deactivated, deleted, or
queried.
I
IANA
IC
ICC
ICMP
ICP
IDU
IEEE
IETF
IF
IGMP
IGMP snooping
IGP
ILM
IMA
IMA frame
A control unit in the IMA protocol. It is a logical frame defined as M consecutive cells, numbered 0 to M-1, transmitted on each of the N links in an IMA group.
IP
Internet Protocol
IP address
A 32-bit (4-byte) binary digit that uniquely identifies a host (computer) connected to the Internet for communication with other hosts in the Internet by transferring packets. An IP address is expressed in dotted decimal notation, consisting of the decimal values of its 4 bytes, separated by periods (.), for example, 127.0.0.1. The first three bytes of an IP address identify the network to which the host is connected, and the last byte identifies the host itself.
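The dotted-decimal notation described above maps directly to a 32-bit value; a minimal sketch of the conversion in both directions (function names are illustrative):

```python
def ip_to_int(dotted: str) -> int:
    """Pack a dotted-decimal IPv4 address into its 32-bit value,
    most significant byte first."""
    octets = [int(part) for part in dotted.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    value = 0
    for octet in octets:
        value = (value << 8) | octet
    return value

def int_to_ip(value: int) -> str:
    """Unpack a 32-bit value back into dotted-decimal notation."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))
```

For example, 127.0.0.1 packs to 0x7F000001 (2130706433), and unpacking that value recovers the original string.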
IPA
IPTV
IPv4
IPv6
IS-IS
ISDN
ISO
ISP
IST
ITC
ITU
ITU-T
IWF
Interworking Function
Institute of Electrical and Electronics Engineers (IEEE)
A society of engineering and electronics professionals based in the United States but boasting membership from numerous other countries. The IEEE focuses on electrical, electronics, computer engineering, and science-related matters.
Interior Gateway Protocol (IGP)
A routing protocol that is used within an autonomous system. The IGP runs in small-sized and medium-sized networks. The commonly used IGPs are the Routing Information Protocol (RIP), the Interior Gateway Routing Protocol (IGRP), the Enhanced IGRP (EIGRP), and Open Shortest Path First (OSPF).
Intermediate System to Intermediate System (IS-IS)
A protocol used by network devices (routers) to determine the best way to forward datagrams or packets through a packet-based network.
International Telecommunication Union (ITU)
A United Nations agency, one of the most important and influential recommendation bodies, responsible for recommending standards for telecommunication (ITU-T) and radio networks (ITU-R).
International Telecommunication Union - Telecommunication Standardization Sector (ITU-T)
Internet Assigned Numbers Authority (IANA)
Internet Control Message Protocol (ICMP)
A network-layer (ISO/OSI level 3) Internet protocol that provides error correction and other information relevant to IP packet processing. For example, it can let the IP software on one machine inform another machine about an unreachable destination. See also communications protocol, IP, ISO/OSI reference model, packet (definition 1).
Internet Engineering Task Force (IETF)
Internet Group Management Protocol (IGMP)
One of the TCP/IP protocols for managing the membership of Internet Protocol multicast groups. It is used by IP hosts and adjacent multicast routers to establish and maintain multicast group memberships.
Issue 03 (2012-11-30)
1337
B Glossary
Internet Protocol television (IPTV)
A system in which video is transmitted in IP packets. Also called "TV over IP", IPTV uses streaming video techniques to deliver scheduled TV programs or video on demand (VOD). Unlike transmission over the air or through cable to a TV set, IPTV uses the transport protocol of the Internet for delivery and requires either a computer with a software media player or an IPTV set-top box to decode the images in real time.
Internet Protocol version 4 (IPv4)
The current version of the Internet Protocol (IP). IPv4 utilizes a 32-bit address which is assigned to hosts. An address belongs to one of five classes (A, B, C, D, or E), is written as 4 octets separated by periods, and may range from 0.0.0.0 to 255.255.255.255. Each IPv4 address consists of a network number, an optional subnetwork number, and a host number. The network and subnetwork numbers together are used for routing, and the host number is used to address an individual host within the network or subnetwork.
Internet Protocol version 6 (IPv6)
An updated version of IPv4, designed by the Internet Engineering Task Force (IETF) and also called IP Next Generation (IPng). It is a new version of the Internet Protocol. The main difference between IPv6 and IPv4 is that an IPv4 address has 32 bits while an IPv6 address has 128 bits.
Internet service provider (ISP)
An organization that offers users access to the Internet and related services.
inbound
Data transmission from the external link to the router for the routers that support the
NetStream feature.
indicator
The maximum amplitude of sinusoidal jitter at a given jitter frequency which, when modulating the signal at an equipment input port, results in no more than two errored seconds cumulative, where these errored seconds are integrated over successive 30-second measurement intervals.
insertion loss
The loss of power that results from inserting a component, such as a connector, coupler,
or splice, into a previously continuous path.
integrated circuit (IC)
A combination of inseparable associated circuit elements that are formed in place and interconnected on or within a single base material to perform a microcircuit function.
intelligent power adjustment (IPA)
A mechanism used to reduce the optical power of all the amplifiers in an adjacent regeneration section upstream to a safe level when the system detects the loss of optical signals on the link. Loss of optical signals may occur if the fiber is broken, the device performance degrades, or a connector is not plugged in properly. With IPA, maintenance engineers are protected from the laser emitted from the end of a broken fiber.
interleaving
A process of systematically changing the bit sequence of a digital signal, usually as part
of the channel coding, in order to reduce the influence of error bursts that may occur
during transmission.
intermediate frequency (IF)
The transitional frequency between the frequencies of a modulated signal and an RF signal.
inverse multiplexing over ATM (IMA)
J
jitter
Short waveform variations caused by vibration, voltage fluctuations, and control system
instability.
jumper
K
K byte
L
L2 switching
L2VPN
LACP
LACPDU
LAG
LAN
LAPS
LB
See loopback.
LBM
LBR
LC
Lucent connector
LCAS
LCN
LCT
LDP
LED
LER
LIFO
LIU
LL
logical link
LLC
LLID
local loopback ID
LM
LOC
loss of continuity
LOM
loss of multiframe
LOP
loss of pointer
LOS
LP
LPA
LPF
LPT
LSP
LSR
LT
linktrace
LTM
LTR
LU
line unit
Layer 2 switching
Link Aggregation Control Protocol (LACP)
label
A short identifier that is of fixed length and local significance, used to uniquely identify the FEC to which a packet belongs. It is carried in the header of a packet and does not contain topology information.
label distribution
Packets with the same destination address belong to an FEC. A label from an MPLS label resource pool is allocated to the FEC. LSRs record the relationship between the label and the FEC, and then send messages advertising this label-FEC relationship to upstream LSRs. This process is called label distribution.
label edge router (LER)
A device that sits at the edge of an MPLS domain and uses routing information to assign labels to datagrams, then forwards them into the MPLS domain.
label switching router (LSR)
A basic element of an MPLS network. All LSRs support the MPLS protocol. The LSR is composed of two parts: a control unit and a forwarding unit. The former is responsible for allocating labels, selecting routes, creating the label forwarding table, and creating and removing the label switch path; the latter forwards labels according to groups received in the label forwarding table.
laser
A component that generates directional optical waves of narrow wavelengths. Laser light has better coherence than ordinary light. The fiber system takes the semiconductor laser as the light source.
last in first out (LIFO)
A playback mode for voice mail in which the last voice mail received is played first.
layer
license
Permission that the vendor grants the user for a specific function, capacity, and duration of a product. A license can be a file or a serial number. Usually the license consists of encrypted codes, and the operation authority granted varies with the level of the license.
light emitting diode (LED)
A display and lighting technology used in almost every electrical and electronic product on the market, from tiny on/off lights to digital readouts, flashlights, traffic lights, and perimeter lighting. LEDs are also used as the light source in multimode fibers, optical mice, and laser-class printers.
line rate
The maximum packet forwarding capacity on a cable. The line rate equals the maximum transmission rate achievable on a given type of medium.
linear MSP
link aggregation group (LAG)
An aggregation that allows one or more links to be aggregated together to form a link aggregation group so that a MAC client can treat the link aggregation group as if it were a single link.
link capacity adjustment scheme (LCAS)
LCAS in the virtual concatenation source and sink adaptation functions provides a control mechanism to hitlessly increase or decrease the capacity of a link to meet the bandwidth needs of the application. It also provides a means of removing member links that have experienced failure. LCAS assumes that in cases of capacity initiation, increase, or decrease, the construction or destruction of the end-to-end path is the responsibility of the network and element management systems.
link monitoring
A mechanism for an interface to notify the peer of the fault when the interface detects
that the number of errored frames, errored codes, or errored frame seconds reaches or
exceeds the specified threshold.
link protection
Protection provided by the bypass tunnel for a link on the working tunnel. The link is a downstream link adjacent to the point of local repair (PLR). When the PLR cannot provide node protection, link protection is provided instead.
linktrace message (LTM)
The message sent by the initiator MEP of 802.1ag MAC Trace to the destination MEP. The LTM includes the Time to Live (TTL) and the MAC address of the destination MEP. For 802.1ag MAC Trace, the destination MEP replies with a response message to the source MEP after it receives the LTM; the response message is called an LTR. The LTR carries a TTL that equals the TTL of the LTM minus 1.
load balancing
The distribution of activity across two or more servers or components in order to avoid
overloading any one with too many requests or too much traffic.
load sharing
A device running mode. Two or more hardware units evenly share the system load based on their processing capabilities when they are operating normally. When a hardware unit fails, the other units take over the tasks of the faulty unit while maintaining system performance, for example, with little call loss.
loading
A process of importing information from the storage device to the memory to facilitate processing (when the information is data) or execution (when the information is a program).
local MEP
local area network (LAN)
A network formed by the computers and workstations within the coverage of a few square kilometers or within a single building. It features high speed and a low error rate. Ethernet, FDDI, and Token Ring are three technologies used to implement a LAN. Current LANs are generally based on switched Ethernet or Wi-Fi technology and run at 1,000 Mbit/s (that is, 1 Gbit/s).
logical interface
An interface that does not exist physically and comes into being through configuration.
It can also exchange data.
Logical Link Control (LLC)
According to the IEEE 802 family of standards, Logical Link Control (LLC) is the upper sublayer of the OSI data link layer. The LLC is the same for the various physical media (such as Ethernet, token ring, and WLAN).
loopback (LB)
A troubleshooting technique that returns a transmitted signal to its source so that the signal or message can be analyzed for errors. The loopback can be an inloop or an outloop.
loopback message (LBM)
The loopback packet sent by a node that supports 802.1ag MAC Ping to the destination node. The LBM carries its own sending time.
loopback reply (LBR)
A response message involved in the 802.1ag MAC Ping function, with which the destination MEP replies to the source MEP after the destination MEP receives the LBM. The LBR carries the sending time of the LBM, the receiving time of the LBM, and the sending time of the LBR.
loss measurement (LM)
A method used to collect counter values applicable to ingress and egress service frames, where the counters maintain a count of transmitted and received data frames between a pair of MEPs.
loss of signal (LOS)
lower subrack
The subrack close to the bottom of the cabinet that contains several subracks.
lower threshold
A lower performance limit which when exceeded by a performance event counter will
trigger a threshold-crossing event.
M
MA
maintenance association
MAC
MAC address
MAC address aging
A function that deletes the MAC address entries of a device when no packets are received from this device within a specified time period.
MADM
MAN
MBS
MCF
MCR
MD
MDP
ME
MEG
MEL
MEP
MFAS
MIP
MLD
MP
maintenance point
MPID
MPLS
MPLS TE
MPLS VPN
MPLS-TP
MS
multiplex section
MSA
MSB
MSOH
MSP
MST
MST region
MSTI
MSTP
MTBF
MTIE
MTTR
MTU
MUX
See multiplexer.
Media Access Control (MAC)
A protocol at the media access control sublayer. The protocol is at the lower part of the data link layer in the OSI model and is mainly responsible for controlling and connecting the physical media at the physical layer. When transmitting data, the MAC protocol first checks whether the data can be transmitted. If it can, certain control information is added to the data, and then the data and the control information are transmitted in a specified format to the physical layer. When receiving data, the MAC protocol checks whether the information is correct and whether the data was transmitted correctly. If so, the control information is removed from the data and the data is passed to the LLC layer.
Multicast Routing Protocol
A protocol used to set up and maintain multicast routes and to correctly and effectively forward multicast packets. The multicast route is used to set up a loop-free transmission path from the source to multiple receivers, that is, the multicast distribution tree.
Multiple Spanning Tree Protocol (MSTP)
A protocol that can be used in a loop network. Using an algorithm, MSTP blocks redundant paths so that the loop network can be trimmed into a tree network, preventing the proliferation and endless cycling of packets in the loop network. MSTP introduces the mapping between VLANs and multiple spanning trees. This solves the problem that data cannot be normally forwarded in a VLAN because in STP/RSTP only one spanning tree corresponds to all the VLANs.
Multiple Spanning Tree region (MST region)
A region that consists of the switches that support MSTP in a LAN and the links among them. Switches that are physically and directly connected and configured with the same MST region attributes belong to the same MST region.
Multiprotocol Label Switching (MPLS)
A technology that uses short tags of fixed length to encapsulate packets in different link layers, and provides connection-oriented switching for the network layer on the basis of IP routing and control protocols. It improves the cost performance and expandability of networks, and is beneficial to routing.
main topology
maintenance domain (MD)
The network, or the part of the network, for which connectivity is managed by connectivity fault management (CFM). The devices in a maintenance domain are managed by a single Internet service provider (ISP).
maintenance entity (ME)
An ME consists of a pair of maintenance entity group end points (MEPs), two ends of a transport trail, and the maintenance association intermediate points (MIPs) on the trail.
maintenance entity group end point (MEP)
An end point of a MEG, which is able to initiate and stop the transmission of OAM data packets for fault management and performance monitoring.
maintenance entity group intermediate point (MIP)
An intermediate point in a MEG, which is able to forward OAM packets and respond to some OAM packets, but unable to initiate the transmission of OAM packets or perform any operations on network connections.
management
information
maximum transmission unit (MTU)
The largest packet of data that can be transmitted on a network. MTU size varies depending on the network: 576 bytes on X.25 networks, for example, 1,500 bytes on Ethernet, and 17,914 bytes on 16 Mbit/s token ring. Responsibility for determining the size of the MTU lies with the link layer of the network. When packets are transmitted across networks, the path MTU, or PMTU, represents the smallest packet size (the one that all networks can transmit without breaking up the packet) among the networks involved.
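The PMTU rule above, together with the fragmentation entry earlier in this glossary, can be sketched in a few lines; the function names are illustrative, and real IP fragmentation header overhead is ignored.

```python
def path_mtu(link_mtus: list) -> int:
    """The path MTU (PMTU) is the smallest MTU among the networks a
    packet traverses: the largest size every network can carry whole."""
    return min(link_mtus)

def fragment(packet_len: int, mtu: int) -> list:
    """Break a packet into pieces no larger than the MTU, returning the
    piece sizes (a sketch of fragmentation as defined above)."""
    return [min(mtu, packet_len - offset)
            for offset in range(0, packet_len, mtu)]
```

Using the example MTUs from the definition, a path crossing Ethernet (1,500), X.25 (576), and token ring (17,914) has a PMTU of 576 bytes.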
mean time between failures (MTBF)
The average time that a device operates between consecutive failures.
measurement period
The interval for NEs to report measurement results to the Network Management System
(NMS).
medium
A physical medium for storing computer information. A medium is used for data
duplication and keeping the data for some time. Original data can be obtained from a
medium.
member
A basic element for forming a dimension according to the hierarchy of each level. Each
member represents a data element in a dimension. For example, January 1997 is a typical
member of the time dimension.
metropolitan area network (MAN)
microwave
The portion of the electromagnetic spectrum with much longer wavelengths than infrared
radiation, typically above about 1 mm.
mirroring
The duplication of data for backup or to distribute network traffic among several
computers with identical data.
monitor link
monitoring
A method that an inspector uses to inspect a service agent. By monitoring a service agent,
an inspector can check each detailed operation performed by the service agent during
the conversation and operate the GUI used by the service agent. The inspector helps the
service agent to provide better service.
mounting
mounting ear
A piece of angle plate with holes in it on a rack. It is used to fix network elements or
components.
multicast
A process of transmitting data packets from one source to many destinations. The destination address of a multicast packet uses a Class D address, that is, an IP address in the range 224.0.0.0 to 239.255.255.255. Each multicast address represents a multicast group rather than a host.
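The Class D range above reduces to a check on the first octet; a minimal sketch (function name is illustrative):

```python
def is_class_d(address: str) -> bool:
    """A destination address is multicast (Class D) when its first octet
    falls in 224-239, i.e. the range 224.0.0.0 to 239.255.255.255."""
    first_octet = int(address.split(".")[0])
    return 224 <= first_octet <= 239
```

For example, 224.0.0.1 and 239.255.255.255 are multicast group addresses, while 192.0.2.1 addresses a host.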
multicast listener discovery (MLD)
A protocol used by an IPv6 router to discover the multicast listeners on its directly connected network segments, and to set up and maintain member relationships. On IPv6 networks, after MLD is configured on the receiver hosts and on the multicast router to which the hosts are directly connected, the hosts can dynamically join related groups and the multicast router can manage members on the local network.
multiframe alignment signal (MFAS)
multiple spanning tree instance (MSTI)
A type of spanning tree calculated by MSTP within an MST region, to provide a simple and fully connected active topology for frames classified as belonging to a VLAN that is mapped to the MSTI by the MST configuration. A VLAN cannot be assigned to multiple MSTIs.
multiplex section protection (MSP)
A function which is performed to provide the capability for switching a signal between and including two multiplex section termination (MST) functions, from a "working" to a "protection" channel.
multiplex section termination (MST)
A function which generates the MSOH during the process of forming an SDH frame signal and terminates the MSOH in the reverse direction.
multiplexer (MUX)
multiplexing
A procedure by which multiple lower order path layer signals are adapted into a higher
order path or the multiple higher order path layer signals are adapted into a multiplex
section.
multiprotocol label switching virtual private network (MPLS VPN)
An Internet Protocol (IP) virtual private network (VPN) based on the multiprotocol label switching (MPLS) technology. It applies the MPLS technology to network routers and switches, simplifies the routing mode of core routers, and combines traditional routing technology with label switching technology. It can be used to construct broadband Intranets and Extranets to meet various service requirements.
N
N+1 protection
A radio link protection system composed of N working channels and one protection
channel.
NAS
NC
NE ID
An ID that indicates a managed device in the network. In the network, each NE has a
unique NE ID.
NGN
NHLFE
NLP
NM
network management
NMC
NNI
network-to-network interface
NP
NPC
NPE
NRT-VBR
NRZ
non-return to zero
NRZ code
non-return-to-zero code
NRZI
NSAP
NSF
non-stop forwarding
NTP
NTP client
A bottom-level device in the time synchronization network. An NTP client obtains time from its upper-level NTP server without providing the time synchronization service. Compared with the top-level NTP server, a middle-level NTP server is sometimes called an NTP client.
network layer
Layer 3 of the seven-layer OSI model of computer networking. The network layer provides routing and addressing so that two terminal systems can be interconnected. In addition, the network layer provides congestion control and traffic control. In the TCP/IP protocol suite, the functions of the network layer are specified and implemented by IP protocols. Therefore, the network layer is also called the IP layer.
network parameter control (NPC)
During communications, UPC is implemented to monitor the actual traffic on each virtual circuit that is input to the network. Once a specified parameter is exceeded, measures are taken to control the traffic. NPC is similar to UPC in function. The difference is that the incoming traffic monitoring function is divided into UPC and NPC according to their positions: UPC is located at the user/network interface, whereas NPC is located at the network interface.
network processor (NP)
An integrated circuit that has a feature set specifically targeted at the networking application domain. Network processors are typically software-programmable devices and have generic characteristics similar to the general-purpose CPUs commonly used in many different types of equipment and products.
network segment
A part of an Ethernet or other network, on which all message traffic is common to all
nodes, that is, it is broadcast from one node on the segment and received by all others.
network service
A service that needs to be enabled at the network layer and maintained as a basic service.
network service access point (NSAP)
A network address defined by ISO, through which entities on the network layer can access OSI network services.
network storm
next generation
network (NGN)
noise figure
non-GNE
non-gateway network
element (non-GNE)
A network element that communicates with the NM application layer through the
gateway NE application layer.
O
O&M
OA
optical amplifier
OADM
OAM
OAMPDU
OAU
OC
ordinary clock
OCP
OCS
ODF
ODU
OFS
out-of-frame second
OHA
overhead access
OHP
overhead processing
OLT
ONU
OPEX
operating expense
OPU
OSC
OSI
OSN
OSNR
OSPF
OTDR
OTM
OTN
OTU
OTUk
Open Shortest Path First (OSPF)
A link-state, hierarchical interior gateway protocol (IGP) for network routing. Dijkstra's algorithm is used to calculate the shortest path tree. It uses cost as its routing metric. A link state database is constructed with the network topology, which is identical on all routers in the area.
offline
Pertaining to the disconnection between a device or a service unit and the system or the network, or to a device or service unit that is not running.
online
A state indicating that a computer device or program is activated and is ready for
operations, and can communicate with a computer or can be controlled by the computer.
open systems
interconnection (OSI)
operation,
administration and
maintenance (OAM)
A group of network support functions that monitor and sustain segment operation,
support activities that are concerned with, but not limited to, failure detection,
notification, location, and repairs that are intended to eliminate faults and keep a segment
in an operational state, and support activities required to provide the services of a
subscriber access network to users/subscribers.
optical add/drop
multiplexer (OADM)
A device that can be used to add the optical signals of various wavelengths to one channel
and drop the optical signals of various wavelengths from one channel.
optical amplifier unit (OAU)
A board that is mainly responsible for amplifying optical signals. The OAU can be used in both the transmitting direction and the receiving direction.
optical attenuator
A passive device that increases the attenuation in a fiber link. It is used to ensure that the optical power of the signals received at the receive end is not excessively high. It is available in two types: fixed attenuator and variable attenuator.
optical connector
optical fiber
optical interface
optical network unit (ONU)
A form of access node that converts optical signals transmitted via fiber to electrical signals that can be transmitted via coaxial cable or twisted-pair copper wiring to individual subscribers.
optical signal-to-noise
ratio (OSNR)
The ratio of signal power to noise power in a transmission link. OSNR is the most important index for measuring the performance of a DWDM system. OSNR = signal power/noise power.
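The ratio above is usually quoted in decibels. A minimal sketch (the function name is illustrative, not from this equipment) that converts the linear power ratio to the dB form commonly used for DWDM links:

```python
import math

def osnr_db(signal_power: float, noise_power: float) -> float:
    """OSNR in dB: 10 * log10(signal power / noise power), both in the same linear unit (e.g. mW)."""
    return 10 * math.log10(signal_power / noise_power)

# A 1 mW signal over 0.001 mW of noise gives an OSNR of 30 dB.
print(osnr_db(1.0, 0.001))
```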
optical splitter
A passive component used for splitting optical power and sending it to multiple ONUs connected by optical fibers. In a GPON system that consists of the OLT, ONU, splitter, and optical fibers, the optical signal over the fiber connected to the OLT is split into multiple channels of optical signals according to the split ratio, and each channel is sent to an ONU. The split ratio determines how many channels of optical signals an optical fiber can be split into.
optical supervisory
channel (OSC)
optical time domain reflectometer (OTDR)
A device that sends a very short pulse of light down a fiber optic communication system and measures the time history of the pulse reflection, to measure the fiber length and the light loss and to locate fiber faults.
optical transponder
unit (OTU)
A device or subsystem that converts the accessed client signals into the G.694.1/G.694.2-compliant WDM wavelength.
orderwire
P
P2MP
point-to-multipoint
P2P
PA
power amplifier
PADR
PBS
PCB
PCM
PCR
PCS
PDH
PDU
PE
PGND cable
A cable which connects the equipment and the protection grounding bar. Usually, one
half of the cable is yellow, whereas the other half is green.
PHB
PIM-DM
PIM-SM
PKT
PLL
PM
performance monitoring
PMD
POH
path overhead
POS
PPD
PPI
PPP
Point-to-Point Protocol
PPPoE
PPS
PQ
PRBS
PRC
PSD
PSN
PSTN
PSU
PT
payload type
PTI
PTN
PTP
PVC
PVID
PVP
PW
PWE3
packet discarding
A function of discarding packets from an unknown VLAN domain, or broadcast packets. Packet discarding is used to prevent unknown or broadcast packets from consuming the bandwidth on a link, improving the reliability of service transmission.
packet forwarding
packet loss
The discarding of data packets in a network when a device is overloaded and cannot
accept any incoming data at a given moment.
packet over SDH/SONET (POS)
A MAN and WAN technology that provides point-to-point data connections. The POS interface uses SDH/SONET as the physical layer protocol, and supports the transport of packet data (such as IP packets) in MANs and WANs.
packet rate
The number of bits or bytes passed within a specified time. It is expressed in bits/s or
bytes/s.
packet switched
network (PSN)
packet switching
paired slots
Two slots of which the overheads can be passed through by using the bus on the
backplane.
parity bit
A check bit appended to an array of binary digits to make the sum of all the binary digits,
including the check bit, always odd or always even.
parity check
A method for character-level error detection. An extra bit is added to a string of bits, usually a 7-bit ASCII character, so that the total number of 1 bits is odd or even (odd or even parity). Both ends of a data transmission must use the same parity. When the transmitting device frames a character, it counts the number of 1s in the frame and attaches the appropriate parity bit. The recipient counts the 1s and, if there is a parity error, may ask for the data to be retransmitted.
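The transmit-side and receive-side steps of a parity check can be sketched as follows (helper names are illustrative), using even parity over a 7-bit ASCII character:

```python
def parity_bit(character: int, even: bool = True) -> int:
    """Return the parity bit so the total count of 1s (data + parity) is even (or odd)."""
    ones = bin(character & 0x7F).count("1")
    bit = ones % 2            # 1 if the count of 1s is currently odd
    return bit if even else bit ^ 1

def check(character: int, bit: int, even: bool = True) -> bool:
    """Receiver side: recount the 1s, including the parity bit, and verify the expected parity."""
    total = bin(character & 0x7F).count("1") + bit
    return total % 2 == (0 if even else 1)

# 'A' = 0b1000001 has two 1s, so even parity appends 0.
print(parity_bit(ord("A")))
```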
parts replacement
passive mode
A working mode of EFM OAM. An interface in the passive mode cannot initiate the
discovery and remote loopback.
patch
peak burst size (PBS)
A parameter that is used to define the capacity of token bucket P, that is, the maximum burst IP packet size when the information is transferred at the peak information rate. This parameter must be larger than 0. It is recommended that the PBS be not less than the maximum length of any IP packet that might be forwarded. See also CIR, CBS, and PIR.
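The role of token bucket P can be illustrated with a simple single-bucket sketch; the class and parameter names are assumptions for illustration, with the bucket filled at the peak information rate (PIR) and capped at the PBS:

```python
class TokenBucket:
    """Illustrative token bucket: capacity = PBS (bytes), fill rate = PIR (bytes/s)."""

    def __init__(self, pir: float, pbs: float):
        self.pir, self.pbs = pir, pbs
        self.tokens = pbs          # bucket starts full
        self.last = 0.0

    def conforms(self, size: int, now: float) -> bool:
        """A packet conforms if enough tokens have accumulated to cover its size."""
        self.tokens = min(self.pbs, self.tokens + (now - self.last) * self.pir)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(pir=100.0, pbs=200.0)
print(tb.conforms(200, 0.0))   # a burst up to the PBS conforms immediately
```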
peer
per-hop behavior
(PHB)
performance alarm
An alarm generated when the actual measurement result of an entity satisfies the predefined threshold logical expression or exceeds the predefined threshold.
performance
parameters
Indexes that scale the general performance of the system, including the number of managed nodes, the number of supported clients, and the log database capacity. The parameters are sorted into static parameters, dynamic parameters, and networking bandwidth parameters.
performance register
The memory space for performance event counts, including 15-min current performance
register, 24-hour current performance register, 15-min historical performance register,
24-hour historical performance register, UAT register and CSES register. The object of
performance event monitoring is the board functional module, so every board functional
module has a performance register. A performance register is used to count the
performance events taking place within a period of operation time, so as to evaluate the
quality of operation from the angle of statistics.
performance threshold
A limit for generating an alarm for a selected entity. When the measurement result reaches or exceeds the preset alarm threshold, the performance management system generates a performance alarm.
permanent virtual path (PVP)
A virtual path that consists of PVCs.
phase
phase-locked loop (PLL)
A circuit that consists essentially of a phase detector, which compares the frequency of a voltage-controlled oscillator with that of an incoming carrier signal or reference-frequency generator. The output of the phase detector, after passing through a loop filter, is fed back to the voltage-controlled oscillator to keep it exactly in phase with the incoming or reference frequency.
physical layer
Layer 1 in the Open Systems Interconnection (OSI) architecture; the layer that provides services to transmit bits or groups of bits over a transmission link between open systems, and which entails electrical, mechanical, and handshaking procedures.
physical link
The link between two physical network elements (NEs). When the user creates NEs or
refreshes the device status, the system automatically creates the physical link according
to the topology structure information on the device. The remark information of a physical
link can be modified, but the physical link cannot be deleted.
ping
A method used to test whether a device in the IP network is reachable according to the
sent ICMP Echo messages and received response messages.
ping test
A test that is performed to send a data packet to the target IP address (a unique IP address
on the device on the network) to check whether the target host exists according to the
data packet of the same size returned from the target host.
plesiochronous digital
hierarchy (PDH)
A multiplexing scheme of bit stuffing and byte interleaving. It multiplexes signals at the minimum rate of 64 kbit/s into the 2 Mbit/s, 34 Mbit/s, 140 Mbit/s, and 565 Mbit/s rates.
point-to-point service
(P2P)
A service between two terminal users. In P2P services, senders and recipients are
terminal users.
pointer
An indicator whose value defines the frame offset of a virtual container with respect to
the frame reference of the transport entity on which this pointer is supported.
polarization
A kind of electromagnetic wave, the direction of whose electric field vector is fixed or
rotates regularly. Specifically, if the electric field vector of the electromagnetic wave is
perpendicular to the plane of horizon, this electromagnetic wave is called vertically
polarized wave; if the electric field vector of the electromagnetic wave is parallel to the
plane of horizon, this electromagnetic wave is called horizontal polarized wave; if the
tip of the electric field vector, at a fixed point in space, describes a circle, this
electromagnetic wave is called circularly polarized wave.
policy
A set of rules that are applied when the conditions for triggering an event are met.
policy template
A template that is used to define the calculation rules of a charging event, for example,
rating, debiting and accumulating. A policy template may contain the parameters to be
instantiated. They can be used when the attributes of the condition judgment, calculation
method, and action functions are carried out.
polling
A mechanism for the NMS to query the agent status and other data on a regular basis.
port VLAN ID (PVID)
A default VLAN ID of a port. It is allocated to a data frame if the data frame carries no VLAN tag when reaching the port.
port priority
The priority that is used when a port attaches tags to Layer 2 packets. Packets received
on ports with higher priorities are forwarded preferentially.
power adjustment
A method for dynamically and properly assigning power according to the real-time status
of a wireless network. When an AP runs under an AC for the first time, the AP uses its
maximum transmit power. When getting reports from its neighbors (that is, other APs
that are detected by the AP and managed by the same AC), the AP determines to increase
or decrease its power according to the report conclusion.
power box
A direct current power distribution box at the upper part of a cabinet, which supplies
power for the subracks in the cabinet.
power control
A process in which the MS or BS uses certain rules to adjust and control the transmit
power according to the change in the channel condition and the power of the received
signal.
power off
power on
power spectrum
density (PSD)
private line
A line, such as a subscriber cable or trunk cable, which is leased from the telecommunication carrier and is used to meet special user requirements.
protection path
provider edge (PE)
A device that is located in the backbone network of the MPLS VPN structure. A PE is responsible for managing VPN users, establishing LSPs between PEs, and exchanging routing information between sites of the same VPN. A PE performs the mapping and forwarding of packets between the private network and the public channel. A PE can be a UPE, an SPE, or an NPE.
pseudo random binary sequence (PRBS)
A sequence that is random in the sense that the value of an element is independent of the values of any of the other elements, similar to real random sequences.
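Such sequences are typically produced by a linear feedback shift register. A minimal sketch of a PRBS7-style generator (the polynomial x^7 + x^6 + 1 and the function name reflect common practice, not this equipment specifically):

```python
def prbs7(n: int, seed: int = 0x7F) -> list:
    """Generate n bits of the PRBS7 sequence from a 7-bit Fibonacci LFSR (x^7 + x^6 + 1)."""
    state = seed & 0x7F
    bits = []
    for _ in range(n):
        newbit = ((state >> 6) ^ (state >> 5)) & 1  # feedback taps at stages 7 and 6
        bits.append(newbit)
        state = ((state << 1) | newbit) & 0x7F
    return bits

seq = prbs7(254)
print(seq[:127] == seq[127:254])  # the maximal-length sequence repeats with period 127
```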
pseudo wire (PW)
An emulated connection between two PEs for transmitting frames. The PW is established
and maintained by PEs through signaling protocols. The status information of a PW is
maintained by the two end PEs of a PW.
pseudo wire emulation edge-to-edge (PWE3)
An end-to-end Layer 2 transmission technology. It emulates the essential attributes of a telecommunication service such as ATM, FR, or Ethernet in a packet switched network (PSN). PWE3 also emulates the essential attributes of low-speed time division multiplexing (TDM) circuits and SONET/SDH. The emulation approximates the real situation.
public switched
telephone network
(PSTN)
pulse
A variation above or below a normal level and a given duration in electrical energy.
pulse code modulation (PCM)
A method of encoding information in a signal by changing the amplitude of pulses. Unlike pulse amplitude modulation (PAM), in which pulse amplitude can change continuously, pulse code modulation limits pulse amplitudes to several predefined values. Because the signal is discrete, or digital, rather than analog, pulse code modulation is more immune to noise than PAM.
Q
QA
Q adaptation
QAM
QPSK
QinQ
A Layer 2 tunnel protocol based on IEEE 802.1Q encapsulation. It adds a public VLAN tag to a frame with a private VLAN tag, allowing the frame with double VLAN tags to be transmitted over the service provider's backbone network based on the public VLAN tag. This provides a Layer 2 VPN tunnel for customers and enables transparent transmission of packets over private VLANs.
QoS
quadrature amplitude
modulation (QAM)
Both an analog and a digital modulation scheme. It conveys two analog message signals, or two digital bit streams, by changing (modulating) the amplitudes of two carrier waves, using the amplitude-shift keying (ASK) digital modulation scheme or the amplitude modulation (AM) analog modulation scheme. These two waves, usually sinusoids, are out of phase with each other by 90° and are thus called quadrature carriers or quadrature components, hence the name of the scheme.
quadrature phase shift keying (QPSK)
A modulation method for data transmission through the conversion or modulation and the phase determination of the reference signals (carrier). It is also called quadriphase PSK, 4-phase PSK, or 4-PSK. QPSK uses four points in the constellation diagram, evenly distributed on a circle. With these phases, each QPSK symbol encodes two bits, and the codes are mapped as Gray code on the diagram for the minimum BER.
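The four evenly spaced constellation points with Gray coding can be sketched as follows (this particular bit-to-phase mapping is one common illustrative choice, not necessarily the one used by the equipment):

```python
import cmath
import math

# Gray-coded QPSK: adjacent constellation points differ in exactly one bit.
QPSK = {
    (0, 0): cmath.exp(1j * math.pi / 4),
    (0, 1): cmath.exp(1j * 3 * math.pi / 4),
    (1, 1): cmath.exp(1j * 5 * math.pi / 4),
    (1, 0): cmath.exp(1j * 7 * math.pi / 4),
}

def modulate(bits):
    """Map a bit list (even length) to QPSK symbols, two bits per symbol."""
    return [QPSK[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(len(modulate([0, 0, 1, 1])))  # 2 symbols for 4 bits
```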
quality of service (QoS)
A commonly-used performance indicator of a telecommunication system or channel. Depending on the specific system and service, it may relate to jitter, delay, packet loss ratio, bit error ratio, and signal-to-noise ratio. It functions to measure the quality of the transmission system and the effectiveness of the services, as well as the capability of a service provider to meet the demands of users.
R
RADIUS
RAI
RDI
RED
REG
See regenerator.
REI
RF
RIP
RMEP
RNC
ROPA
RP
rendezvous point
RPR
RS232
An asynchronous transfer mode that does not involve handshaking signals. It can communicate with the RS232 and RS422 interfaces of other stations in point-to-point mode, and the transmission is transparent. Its highest speed is 19.2 kbit/s.
RS422
The specification that defines the electrical characteristics of balanced voltage digital interface circuits. The interface can be changed to RS232 via a hardware jumper; its other characteristics are the same as those of RS232.
RSL
RSOH
RSSI
RST
RSTP
RTN
RTP
Rapid Spanning Tree Protocol (RSTP)
An evolution of the Spanning Tree Protocol, providing faster spanning tree convergence after a topology change. RSTP is backward compatible with STP.
Real-Time Transport
Protocol (RTP)
A type of host-to-host protocol used in real-time multimedia services such as Voice over
IP (VoIP) and video.
Remote Authentication Dial-In User Service (RADIUS)
A security service that authenticates and authorizes dial-up users and is a centralized access control mechanism. RADIUS uses the User Datagram Protocol (UDP) as its transmission protocol to ensure real-time quality. RADIUS also supports the retransmission and multi-server mechanisms to ensure good reliability.
RoHS
Routing Information
Protocol (RIP)
A simple routing protocol that is part of the TCP/IP protocol suite. It determines a route
based on the smallest hop count between source and destination. RIP is a distance vector
protocol that routinely broadcasts routing information to its neighboring routers and is
known to waste bandwidth.
radio network
controller (RNC)
A piece of equipment in the RNS which is in charge of controlling the use and the integrity
of the radio resources.
radio propagation
model
random early detection (RED)
A packet loss algorithm used in congestion avoidance. It discards packets according to the specified upper and lower limits of a queue so that the global TCP synchronization resulting from traditional tail drop can be prevented.
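The drop decision between the two queue limits is typically a linearly increasing probability. A minimal sketch (thresholds and the function name are illustrative):

```python
import random

def red_drop(avg_qlen: float, min_th: float, max_th: float, max_p: float) -> bool:
    """Simplified RED decision: no drops below min_th, certain drop at or above max_th,
    and a linearly increasing drop probability in between."""
    if avg_qlen < min_th:
        return False
    if avg_qlen >= max_th:
        return True
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p

print(red_drop(5, min_th=10, max_th=30, max_p=0.1))   # False: below the lower limit
```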
rate limiting
A traffic management technology used to limit the total rate of packet sending on a
physical interface or a Tunnel interface. Rate limiting is directly enabled on the interface
to control the traffic passing the interface.
reboot
To start the system again. Programs or data will be reloaded to all boards.
received signal strength indicator (RSSI)
The received wideband power, including thermal noise and noise generated in the receiver, within the bandwidth defined by the receiver pulse shaping filter, for TDD within a specified timeslot. The reference point for the measurement shall be the antenna connector.
receiver sensitivity
recognition
reference clock
A stable and high-precision autonomous clock that provides frequencies for other clocks for reference.
reflectance
The ratio of the reflected optical power to the incident optical power.
regeneration
The process of receiving and reconstructing a digital signal so that the amplitudes,
waveforms and timing of its signal elements are constrained within specified limits.
regenerator (REG)
relay
An electronic control device that has a control system and a system to be controlled. The
relay of the telepresence system is used to control the power of telepresence equipment
and is controlled by the telepresence host.
remote optical
pumping amplifier
(ROPA)
A remote optical amplifier subsystem designed for applications where power supply and
monitoring systems are unavailable. The ROPA subsystem is a power compensation
solution to the ultra-long distance long hop (LHP) transmission.
reservation
An action that the charging module performs to freeze a subscriber's balance amount,
free resources, credits, or quotas before the subscriber uses services. This action ensures
that the subscriber has sufficient balance to pay for services.
resistance
The ability to impede (resist) the flow of electric current. With the exception of
superconductors, all substances have a greater or lesser degree of resistance. Substances
with very low resistance, such as metals, conduct electricity well and are called
conductors. Substances with very high resistance, such as glass and rubber, conduct
electricity poorly and are called nonconductors or insulators.
resource sharing
response
A message that is returned to the requester to notify the requester of the status of the
request packet.
robustness
The ability of a system to maintain function even with changes in internal structure or
external environment.
rollback
root alarm
An alarm directly caused by anomaly events or faults in the network. Some lower-level
alarms always accompany a root alarm.
route
The path that network traffic takes from its source to its destination. In a TCP/IP network,
each IP packet is routed independently. Routes can change dynamically.
router
A device on the network layer that selects routes in the network. The router selects the optimal route according to the destination address of a received packet and forwards the packet to the next router; the last router is responsible for sending the packet to the destination host. A router can be used to connect a LAN to a LAN, a WAN to a WAN, or a LAN to the Internet.
routing
The determination of a path that a data unit (frame, packet, message) traverses from
source to destination.
routing protocol
A formula used by routers to determine the appropriate path onto which data should be
forwarded.
rt-VBR
S1 byte
SAN
SAToP
SC
square connector
SCR
SD
SD trigger flag
A signal degrade trigger flag that determines whether to perform a switching when SD
occurs. The SD trigger flag can be set by using the network management system.
SD-SDI
SDH
SDP
SDRAM
SELV
SEMF
SES
SETS
SF
SFP
SFTP
SHDSL
SMSR
SNC
subnetwork connection
SNCMP
SNCP
SNCTP
SNMP
SNR
SOH
section overhead
SONET
SPE
SSL
SSM
SSMB
SSU
STD
STP
SVC
Secure Sockets Layer (SSL)
A security protocol that works at the socket level. This layer exists between the TCP layer and the application layer to encrypt/decode data and authenticate concerned entities.
Simple Network
Management Protocol
(SNMP)
A network management protocol of TCP/IP. It enables remote users to view and modify
the management information of a network element. This protocol ensures the
transmission of management information between any two points. The polling
mechanism is adopted to provide basic function sets. According to SNMP, agents, which
can be hardware as well as software, can monitor the activities of various devices on the
network and report these activities to the network console workstation. Control
information about each device is maintained by a management information block.
Synchronization Status Message (SSM)
A message that carries quality levels of timing signals on a synchronous timing link. Nodes on an SDH network and a synchronization network acquire upstream clock information through this message. Then the nodes can perform proper operations on their clocks, such as tracing, switching, or converting to holdoff, and forward the synchronization information to downstream nodes.
security
Protection of a computer system and its data from harm or loss. A major focus of
computer security, especially on systems accessed by many people or through
communication lines, is preventing system access by unauthorized individuals.
security service
self-healing
serial port
An input/output location (channel) that sends and receives data to and from a computer's
CPU or a communications device one bit at a time. Serial ports are used for serial data
communication and as interfaces with some peripheral devices, such as mice and printers.
service flow
service level
service protection
A measure that ensures that services can be received at the receive end.
session
A logical connection between two nodes on a network for the exchange of data. It
generally can apply to any link between any two data devices. A session is also used
simply to describe the connection time.
shaping
signal degrade (SD)
A signal indicating that associated data has degraded in the sense that a degraded defect condition is active.
signal fail (SF)
A signal indicating that associated data has failed in the sense that a near-end defect condition (non-degrade defect) is active.
signal-to-noise ratio
(SNR)
The ratio of the amplitude of the desired signal to the amplitude of noise signals at a
given point in time. SNR is expressed as 10 times the logarithm of the power ratio and
is usually expressed in dB (Decibel).
signaling
single-ended switching
A protection operation method that takes switching action only at the affected end of the protected entity (for example, trail or subnetwork connection), in the case of a unidirectional failure.
single-pair high-speed
digital subscriber line
(SHDSL)
A symmetric digital subscriber line technology developed from HDSL, SDSL, and
HDSL2, which is defined in ITU-T G.991.2. The SHDSL port is connected to the user
terminal through the plain telephone subscriber line and uses trellis coded pulse
amplitude modulation (TC-PAM) technology to transmit high-speed data and provide
the broadband access service.
single-polarized
antenna
An antenna intended to radiate or receive radio waves with only one specified
polarization.
slicing
smooth upgrade
span
The physical reach between two pieces of WDM equipment. The number of spans
determines the signal transmission distance supported by a piece of equipment and varies
according to transmission system type.
static ARP
A manually configured binding of certain IP addresses to a specified gateway. Packets destined for these IP addresses must be forwarded through this gateway.
static route
A route that cannot adapt to the change of network topology. Operators must configure
it manually. When a network topology is simple, the network can work in the normal
state if only the static route is configured. It can improve network performance and ensure
bandwidth for important applications. Its disadvantage is as follows: When a network is
faulty or the topology changes, the static route does not change automatically. It must
be changed by the operators.
statistical multiplexing
A multiplexing technique whereby information from multiple logical channels can be transmitted across a single physical channel. It dynamically allocates bandwidth only to active input channels, to make better use of available bandwidth and allow more devices to be connected than with other multiplexing techniques.
steering
A protection switching mode defined in ITU-T G.8132, which is applicable to packet-based T-MPLS ring networks and similar to SDH transoceanic multiplex section protection (MSP). In this mode, the switching is triggered by the source and sink nodes of a service.
stress
The force, or combination of forces, which produces a strain; force exerted in any
direction or manner between contiguous bodies, or parts of bodies, and taking specific
names according to its direction, or mode of action, as thrust or pressure, pull or tension,
shear or tangential stress.
subnet
One of the smaller networks that form a larger network according to a rule, for example, according to different districts. Subnetting facilitates the management of a large network.
subnet mask
The technique used by the IP protocol to determine which network segment packets are destined for. The subnet mask is a binary pattern, stored in the client machine, server, or router, that is matched with the IP address.
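The matching is a bitwise AND of the address with the mask; Python's standard `ipaddress` module performs it directly:

```python
import ipaddress

# ANDing the host address with the subnet mask yields its network segment.
host = ipaddress.ip_interface("192.168.10.57/255.255.255.0")
print(host.network)            # 192.168.10.0/24
print(host.ip in host.network)
```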
superstratum provider edge (SPE)
Core devices that are located within a VPLS full-meshed network. The UPE devices that are connected to the SPE devices are similar to CE devices. The PWs set up between the UPE devices and the SPE devices serve as the ACs of the SPE devices. The SPE devices must learn the MAC addresses of all the sites on the UPE side and those of the UPE interfaces that are connected to the SPE. An SPE is sometimes called an NPE.
suppress
To forbid the printing of the paper bill of an account that meets certain conditions during
the bill run.
suspension
A specific state in the life cycle of a subscriber. A subscriber in this state can neither
make calls nor receive calls.
switching capacity
switching priority
A priority of a board that is defined for protection switching. When several protected
boards need to be switched, a switching priority should be set for each board. If the
switching priorities of the boards are the same, services on the board that fails later cannot
be switched. Services on the board with higher priority can preempt the switching
resources of that with lower priority.
synchronous digital
hierarchy (SDH)
A transmission scheme that follows ITU-T G.707, G.708, and G.709. It defines the transmission features of digital signals such as frame structure, multiplexing mode, transmission rate level, and interface code. SDH is an important part of ISDN and B-ISDN. It interleaves the bytes of low-speed signals to multiplex the signals into high-speed counterparts, and scrambling is the only line coding applied to the signals. SDH is suitable for high-speed, large-capacity fiber communication systems since it uses synchronous multiplexing and a flexible mapping structure.
synchronous dynamic random access memory (SDRAM)
A new type of DRAM that can run at much higher clock speeds than conventional memory. SDRAM synchronizes itself with the CPU's bus and is capable of running at 100 MHz, about three times faster than conventional FPM RAM, and about twice as fast as EDO DRAM or BEDO DRAM. SDRAM is replacing EDO DRAM in computers.
synchronous optical
network (SONET)
T
TCA
TCI
TCM
TCN
TCP
TCP/IP
TDC
TDM
TE
terminal equipment
TFTP
TIM
TLV
See type-length-value.
TM
TMN
TOD
time of day
TPID
TPS
TPS protection
The equipment level protection that uses one standby tributary board to protect N
tributary boards. When a fault occurs on the working board, the SCC issues the switching
command, and the payload of the working board can be automatically switched over to
the specified protection board and the protection board takes over as the working board.
After the fault is rectified, the service is automatically switched to the original board.
TSD
TTI
TTL
TTSI
TU
tributary unit
TU-LOP
TUG
Tc
Telnet
A standard terminal emulation protocol in the TCP/IP protocol stack. Telnet allows users
to log in to remote systems and use resources as if they were connected to a local system.
Telnet is defined in RFC 854.
ToS
type of service
Transmission Control Protocol (TCP)
The protocol within TCP/IP that governs the breakup of data messages into packets to
be sent using Internet Protocol (IP), and the reassembly and verification of the complete
messages from packets received by IP. A connection-oriented, reliable protocol (reliable
in the sense of ensuring error-free delivery), TCP corresponds to the transport layer in
the ISO/OSI reference model.
Trivial File Transfer Protocol (TFTP)
A small and simple alternative to FTP for transferring files. TFTP is intended for
applications that do not need complex interactions between the client and server. TFTP
restricts operations to simple file transfers and does not provide authentication. TFTP is
small enough to be contained in ROM and can be used for bootstrapping diskless machines.
tail drop
A congestion management mechanism in which packets that arrive after the queue is
full are discarded. This policy of discarding packets may result in network-wide
synchronization because of the TCP slow start mechanism.
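For illustration only (this sketch is not part of the glossary), the tail-drop policy described above can be modeled in Python; the class and attribute names are invented for the example:

```python
from collections import deque

class TailDropQueue:
    """Illustrative tail drop: the queue holds at most `limit` packets;
    arrivals that find the queue full are simply discarded."""

    def __init__(self, limit):
        self.limit = limit
        self.q = deque()
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.limit:
            self.dropped += 1   # queue full: the late arrival is dropped
            return False
        self.q.append(pkt)
        return True

q = TailDropQueue(limit=2)
results = [q.enqueue(n) for n in range(3)]
print(results, q.dropped)  # [True, True, False] 1
```

When many TCP flows lose packets at the same full queue, they back off and ramp up together, which is the network-wide synchronization the entry mentions.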
tangent ring
A concept borrowed from geometry. Two tangent rings have a common node between
them. The common node often leads to single-point failures.
telecommunications
management network
(TMN)
terminal multiplexer
(TM)
A device used at a network terminal to multiplex multiple channels of low rate signals
into one channel of high rate signals, or to demultiplex one channel of high rate signals
into multiple channels of low rate signals.
threshold
An amount, limit, or level on a scale. When the threshold is reached, a change occurs.
threshold alarm
An alarm that occurs when the monitored value exceeds the threshold.
threshold crossing
alarm (TCA)
throughput
The maximum transmission rate of the tested object (system, equipment, connection,
service type) when no packet is discarded. Throughput can be measured with bandwidth.
throughput capability
time division
multiplexing (TDM)
A multiplexing technology. TDM divides the sampling cycle of a channel into time slots
(TSn, n = 0, 1, 2, 3), and the sampled values of multiple signals occupy the time slots
in a fixed order, forming multiple multiplexed digital signals that are transmitted over
one channel.
time to live (TTL)
A technique used in best-effort delivery systems to prevent packets from looping endlessly.
The TTL is set by the sender to the maximum time the packet is allowed to remain in the
network. Each router in the network decrements the TTL value when the packet arrives,
and discards the packet if the TTL counter reaches zero.
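As a hedged illustration (not from this glossary), the per-hop TTL decrement can be sketched in Python; the `forward` function, the packet dictionary, and the router names are invented for the example:

```python
def forward(packet, path):
    """Illustrative TTL handling: each hop decrements the TTL and discards
    the packet when the counter reaches zero, preventing endless loops."""
    for router in path:
        packet["ttl"] -= 1
        if packet["ttl"] <= 0:
            # In a real network the router would also send an
            # ICMP time exceeded message back to the sender.
            return f"dropped at {router}"
    return "delivered"

print(forward({"ttl": 3}, ["R1", "R2", "R3", "R4"]))  # dropped at R3
print(forward({"ttl": 8}, ["R1", "R2", "R3", "R4"]))  # delivered
```

The ICMP time exceeded messages generated this way are what the traceroute entry below relies on to discover the routers along a path.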
timer
Symbolic representation for a timer object (for example, a timer object may have a
primitive designated as T-Start Request). Various MAC entities utilize timer entities that
provide triggers for certain MAC state transitions.
timestamp
The current time of an event that is recorded by a computer. By using mechanisms such
as the Network Time Protocol (NTP), a computer maintains accurate current time,
calibrated to minute fractions of a second.
token bucket algorithm
The token bucket is a container for tokens. The capacity of the bucket is limited, and
the number of tokens in it determines the rate of traffic that is permitted. The token
bucket polices traffic: tokens are placed into the bucket at a preset rate, and when the
bucket is full, no more tokens can be added. A packet can be forwarded only when the
bucket holds enough tokens; otherwise it must wait until new tokens arrive. This scheme
regulates the rate of packet input.
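As an illustrative sketch (not part of this document), the token bucket behavior described above can be modeled in Python; the class name, the `rate`/`capacity` parameters, and the packet sizes are hypothetical:

```python
import time

class TokenBucket:
    """Illustrative token bucket: tokens accrue at `rate` per second up to
    `capacity`; a packet consuming `size` tokens is forwarded only if
    enough tokens are available."""

    def __init__(self, rate, capacity):
        self.rate = rate            # token fill rate (tokens per second)
        self.capacity = capacity    # bucket depth; excess tokens are discarded
        self.tokens = capacity      # bucket starts full
        self.last = time.monotonic()

    def allow(self, size):
        now = time.monotonic()
        # Refill according to elapsed time, never exceeding the capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True     # packet conforms and is forwarded
        return False        # not enough tokens: packet waits or is dropped

bucket = TokenBucket(rate=100, capacity=200)
print(bucket.allow(150))  # True: the bucket starts full
print(bucket.allow(150))  # False: only about 50 tokens remain
```

Because the bucket can hold up to `capacity` tokens, short bursts above the average rate are permitted, which is the property that distinguishes policing with a token bucket from a strict fixed-rate limiter.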
topology
topology discovery
trTCM
traceroute
A program that prints the path to a destination. Traceroute sends a sequence of datagrams
with the time-to-live (TTL) set to 1, 2, and so on, and uses the ICMP time exceeded
messages that are returned to determine the routers along the path.
traffic
The product of the number of calls made and received and the average duration of each
call in a measurement period.
traffic classification
A function that enables you to classify traffic into different classes with different
priorities according to some criteria. Each class of traffic has a specified QoS in the entire
network. In this way, different traffic packets can be treated differently.
traffic policy
A full set of QoS policies formed by association of traffic classification and QoS actions.
traffic shaping
A way of controlling the network traffic from a computer to optimize or guarantee
performance and minimize delay. It actively adjusts the output rate of traffic so that
the traffic matches the network resources provided by lower-layer devices, avoiding
packet loss and congestion.
traffic statistics
trail management function
A network-level management function of the network management system. This function
enables you to configure end-to-end services, view the graphical interface and visual
routes of a trail, query detailed information about a trail, quickly filter, search for,
and locate a trail, manage and maintain trails in a centralized manner, manage alarms
and performance data by trail, and print trail reports.
trail termination source identifier (TTSI)
A TTSI uniquely identifies an LSP in the network. A TTSI is carried in the connectivity
verification (CV) packet for checking the connectivity of a trail. If it matches the TTSI
received by the sink point, the trail has no connectivity defect.
transaction
A business interaction between a carrier and a customer, such as a payment or an account
adjustment.
transfer
transit
After a packet is labeled, it is transmitted along an LSP that consists of a series of
LSRs. The intermediate nodes are called transits.
transit node
transmission delay
The period from the time when a site starts to transmit a data frame to the time when the
site finishes the data frame transmission. It consists of the transmission latency and the
equipment forwarding latency.
transmit power control
A technical mechanism used in some networking devices to prevent excessive unwanted
interference between different wireless networks.
transparent transmission
A process in which the content of the signaling protocol or data is not processed; it is
only encapsulated in the required format and passed on for processing in the next phase.
tray
A component that can be installed in the cabinet for holding chassis or other devices.
tributary loopback
A loopback performed on each path of the tributary board so that a fault can be located
for each service path. There are three loopback modes: no loopback, outloop, and inloop.
tributary protection
switching (TPS)
trunk
A physical communications line between two offices. It transports media signals such as
speech, data, and video signals.
trunk link
trunk port
A switch port used to connect to other switches. A trunk port can connect only to a
trunk link. Only VLANs that are allowed to pass through a trunk port can be configured
on that port.
tunnel
A channel on the packet switching network that transmits service traffic between PEs.
In VPN, a tunnel is an information transmission channel between two entities. The tunnel
ensures secure and transparent transmission of VPN information. In most cases, a tunnel
is an MPLS tunnel.
tunnel ID
A group of information, including the token, slot number of an outgoing interface, tunnel
type, and location method.
twisted pair
A type of cable that consists of two independently insulated wires twisted around one
another to cancel out electromagnetic interference, which can cause crosstalk. The number
of twists per meter makes up part of the specification for a given type of cable. The
greater the number of twists, the more crosstalk is reduced.
two rate three color marker (trTCM)
An algorithm that meters an IP packet stream and marks its packets, based on two rates,
the Peak Information Rate (PIR) and the Committed Information Rate (CIR), and their
associated burst sizes, as either green, yellow, or red. A packet is marked red if it
exceeds the PIR. Otherwise it is marked either yellow or green depending on whether it
exceeds or does not exceed the CIR.
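The marking logic described above can be sketched in Python (a color-blind sketch loosely after RFC 2698, not taken from this document; the class and parameter names are invented):

```python
class TrTCM:
    """Illustrative two rate three color marker: two token buckets are
    refilled at the peak (PIR) and committed (CIR) rates; a packet is
    marked by comparing its size against each bucket."""

    def __init__(self, cir, cbs, pir, pbs):
        self.cir, self.cbs = cir, cbs   # committed rate / committed burst size
        self.pir, self.pbs = pir, pbs   # peak rate / peak burst size
        self.tc, self.tp = cbs, pbs     # current token counts, buckets full

    def refill(self, elapsed):
        # Tokens accrue at each rate, capped at the burst size.
        self.tc = min(self.cbs, self.tc + self.cir * elapsed)
        self.tp = min(self.pbs, self.tp + self.pir * elapsed)

    def mark(self, size):
        if self.tp < size:
            return "red"        # exceeds the PIR
        if self.tc < size:
            self.tp -= size
            return "yellow"     # within the PIR but exceeds the CIR
        self.tp -= size
        self.tc -= size
        return "green"          # within both rates

m = TrTCM(cir=100, cbs=100, pir=200, pbs=200)
print(m.mark(100))  # green
print(m.mark(100))  # yellow: the committed bucket is now empty
print(m.mark(100))  # red: the peak bucket is now empty too
```

The color can then drive a policy such as forwarding green packets, remarking yellow ones to a lower priority, and dropping red ones.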
type-length-value (TLV)
An encoding type that features high efficiency and extensibility. It is also called
Code-Length-Value (CLV). T indicates the type, so that different types can be defined
through different values; L indicates the total length of the value field; V carries the
actual data of the TLV and is the most important part. TLV encoding is highly extensible:
new TLVs can be added to support new features, which makes it flexible in describing the
information carried in packets.
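A minimal Python sketch of TLV encoding and decoding, for illustration only (the function names and the one-byte type/length layout are assumptions, not from this document):

```python
import struct

def encode_tlv(t, value):
    """Pack one TLV: a 1-byte type, a 1-byte length, then the value bytes."""
    return struct.pack("BB", t, len(value)) + value

def decode_tlvs(data):
    """Walk a byte string and collect (type, value) pairs. A decoder can
    skip TLVs with unknown types, which is what makes TLV easy to extend."""
    out, i = [], 0
    while i < len(data):
        t, length = struct.unpack_from("BB", data, i)
        out.append((t, data[i + 2:i + 2 + length]))
        i += 2 + length
    return out

# Two hypothetical TLVs: type 1 carries a name, type 2 a 4-byte address.
buf = encode_tlv(1, b"host") + encode_tlv(2, b"\x0a\x00\x00\x01")
print(decode_tlvs(buf))
```

Because each record carries its own length, a parser never needs to understand a TLV's type to step over it, which is how new TLVs can be added without breaking old decoders.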
U
UART
UAS
unavailable second
UAT
UBR
UBR+
UDP
UNI
UPC
UPE
UPI
UPM
UPS
UTC
User Datagram Protocol (UDP)
A TCP/IP standard protocol that allows an application program on one device to send a
datagram to an application program on another. UDP uses IP to deliver datagrams. UDP
provides application programs with an unreliable, connectionless packet delivery service:
UDP messages may be lost, duplicated, delayed, or delivered out of order. The destination
device does not confirm whether a data packet has been received.
unavailable time event (UAT)
An event that is reported when the monitored object generates 10 consecutive severely
errored seconds (SES), at which point the SESs begin to be included in the unavailable
time. The event ends when the bit error ratio per second is better than 10^-3 for 10
consecutive seconds.
unicast
unknown multicast
packet
A packet for which no forwarding entry is found in the multicast forwarding table.
uplink
A transmission channel through which radio signals or other signals are transmitted to
the central office.
uplink tunnel
upload
upper limit
The maximum consumption amount that a carrier sets for a subscriber in a bill cycle. If
the consumption amount of a subscriber exceeds the maximum consumption amount that
the carrier sets, the OCS still deducts the maximum consumption amount that the carrier
sets.
upstream
In an access network, the direction away from the subscriber end of the link.
upstream board
A board that provides the upstream transmission function. Through an upstream board,
services can be transmitted upstream to the upper-layer device.
usage parameter control (UPC)
During communications, UPC monitors the actual traffic on each virtual circuit that
enters the network. Once a specified parameter is exceeded, measures are taken to control
the traffic. NPC is similar to UPC in function; the difference is that the incoming
traffic monitoring function is divided into UPC and NPC according to position: UPC is
located at the user-network interface, while NPC is located at the network interface.
user-to-network interface (UNI)
The interface between user equipment and private or public network equipment (for
example, ATM switches).
V
V-NNI
V-UNI
V.24
The physical layer interface specification between DTE and DCE defined by the ITU-T.
It complies with EIA/TIA-232.
VAS
VB
virtual bridge
VBR
VC trunk
VCC
VCCV
VCG
VCI
VCTRUNK
A virtual concatenation group applied in data service mapping, also called the internal
port of a data service processing board.
VIP
VLAN
VLAN mapping
A technology that enables user packets to be transmitted over the public network by
translating private VLAN tags into public VLAN tags. When user packets arrive at the
destination private network, VLAN mapping translates public VLAN tags back into
private VLAN tags. In this manner, user packets are correctly transmitted to the
destination.
One of the properties of the MST region, which describes mappings between VLANs
and spanning tree instances.
VLAN stacking
A technology that adds a VLAN tag to each incoming packet. The VLAN stacking
technology implements transparent transmission of C-VLANs in the ISP network to
realize the application of Layer 2 Virtual Private Network (VPN).
VP
VPI
VPLS
VPN
VRRP
VSI
Virtual Router Redundancy Protocol (VRRP)
A protocol used on LANs that support multicast or broadcast, such as Ethernet. A group
of routers (including an active router and several backup routers) in a LAN is regarded
as a single virtual router, which is called a backup group. The virtual router has its
own IP address. Hosts in the network communicate with other networks through this
virtual router. If the active router in the backup group fails, one of the backup routers
in the backup group becomes active and provides routing services for the hosts in the
network.
VoIP
value-added service
(VAS)
A service provided by carriers and service providers (SPs) together for subscribers based
on voice, data, images, SMS messages, and so on. Communication network technologies,
computer technologies, and Internet technologies are used to provide value-added
services.
variable bit rate (VBR)
One of the traffic classes used by ATM (Asynchronous Transfer Mode). Unlike a
permanent CBR (constant bit rate) channel, a VBR data stream varies in bandwidth and
is better suited to non-real-time transfers than to real-time streams such as voice calls.
virtual channel connection (VCC)
A VC logical trail that carries data between two end points in an ATM network. A
point-to-multipoint VCC is a set of ATM virtual connections between two or multiple
end points.
virtual circuit
virtual concatenation group (VCG)
A group of co-located member trail termination functions that are connected to the same
virtual concatenation link.
virtual container trunk (VC trunk)
The logical path formed by some cascaded VCs.
virtual fiber
The fiber that is created between different devices. A virtual fiber represents the optical
path that bears SDH services in a WDM system.
virtual path (VP)
A bundle of virtual channels, all of which are switched transparently across an ATM
network based on a common VPI.
virtual path identifier (VPI)
The field in the Asynchronous Transfer Mode (ATM) cell header that identifies the
virtual path to which the cell belongs.
virtual private LAN service (VPLS)
A type of point-to-multipoint L2VPN service provided over the public network. VPLS
enables geographically isolated user sites to communicate with each other through the
MAN/WAN as if they were on the same LAN.
virtual user-network interface (V-UNI)
voice over IP (VoIP)
An IP telephony term for a set of facilities used to manage the delivery of voice
information over the Internet. VoIP involves sending voice information in digital form
in discrete packets rather than by using the traditional circuit-committed protocols of
the public switched telephone network (PSTN).
voltage drop
The voltage developed across a component or conductor by the flow of current through
the resistance or impedance of that component or conductor.
W
WAN
WCDMA
WDM
WFQ
WLAN
WRED
WRR
WTR
Web LCT
Wideband Code Division Multiple Access (WCDMA)
A standard defined by the ITU-T for the third-generation wireless technology derived
from the Code Division Multiple Access (CDMA) technology.
wait to restore (WTR)
The number of minutes to wait before services are switched back to the working line.
wavelength
The distance between successive peaks or troughs in a traveling wave, that is, the distance
over which a wave is transmitted within a vibration period.
wavelength protection
group
Data for describing the wavelength protection structure. Its function is similar to that of
the protection subnet for SDH NEs. The wavelength path protection can work only with
the correct configuration of the wavelength protection group.
weighted random early detection (WRED)
A packet loss algorithm used for congestion avoidance. It can prevent the global TCP
synchronization caused by traditional tail drop. WRED favors high-priority packets when
calculating the packet loss ratio.
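For illustration only (not from this document), the per-packet WRED drop decision can be sketched in Python; the function name, thresholds, and probability cap are hypothetical configuration values:

```python
import random

def wred_drop(avg_qlen, min_th, max_th, max_p):
    """Illustrative WRED decision: below min_th never drop, at or above
    max_th always drop, and in between drop with a probability that rises
    linearly toward max_p as the average queue length grows."""
    if avg_qlen < min_th:
        return False
    if avg_qlen >= max_th:
        return True
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p

# Higher-priority traffic is typically configured with higher thresholds,
# so at the same average queue depth it is dropped less often.
print(wred_drop(5, min_th=10, max_th=30, max_p=0.1))   # False: queue is short
print(wred_drop(35, min_th=10, max_th=30, max_p=0.1))  # True: queue is long
```

Dropping packets probabilistically, flow by flow, is what breaks the synchronized back-off that tail drop causes across many TCP flows at once.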
wide area network (WAN)
A network composed of computers that are far away from each other and are physically
connected through specific protocols. A WAN covers a broad area, such as a province, a
state, or even a country.
wireless local area network (WLAN)
A hybrid of the computer network and wireless communication technologies. It uses
wireless multiple-access channels as transmission media and carries out data interaction
through electromagnetic waves to implement the functions of a traditional LAN.
working path
working service
wrapping
A protection switching mode defined in ITU-T G.8132, which is applicable to packet-based
T-MPLS ring networks and is similar to SDH two-fiber bidirectional multiplex section
protection (MSP). In this mode, switching is triggered by the node that detects a
failure. For details, see ITU-T G.841.
X
X.21
ITU-T standard for serial communications over synchronous digital lines. It is mainly
used in Europe and Japan.
X.25
A data link layer protocol. It defines the communication in the Public Data Network
(PDN) between a host and a remote terminal.
Y
Y.1731
The OAM protocol introduced by the ITU-T. Besides the contents defined by
IEEE 802.1ag, ITU-T Recommendation Y.1731 also defines the following OAM
messages: Alarm Indication Signal (AIS), Remote Defect Indication (RDI), Locked Signal
(LCK), Test Signal, Automatic Protection Switching (APS), Maintenance Communication
Channel (MCC), Experimental (EXP), and Vendor Specific (VSP), for fault management
and performance monitoring, such as frame loss measurement (LM) and delay
measurement (DM).