Multi-Chassis Link Aggregation Group Redundancy for Layer 2 Virtual Private Networks
Technology Information Guide
Abstract
Network capacity and redundancy are key concerns for providers delivering customer services; both
can be enhanced using Link Aggregation Groups (LAGs)1. A LAG provides redundancy and extra
capacity for a point-to-point connection between two systems by combining multiple links into
a group and representing the group as a single bundle. The LAG offers the aggregate capacity of
its N links and remains active, with reduced capacity, even if some member links fail.
Multi-chassis LAGs extend the group across two systems and, combined with virtual private local
area network service (VPLS) point-to-multipoint and Ethernet virtual leased line (VLL)
point-to-point services, allow providers to deliver highly redundant services to their customers.
Table of Contents

Introduction ................ 1
Conclusion ................. 17
Introduction
This document describes the expanding application of Link Aggregation Groups (LAGs); it covers
the basic properties of a LAG and the unique evolution of LAG technology on Alcatel-Lucent's
service router product set (the Alcatel-Lucent 7450 Ethernet Service Switch [ESS], the Alcatel-Lucent 7750 Service Router [SR] and the Alcatel-Lucent 7710 Service Router [SR]), specifically:
Redundant hot-standby links to access devices using LAG sub-groups
Multi-chassis LAGs (MC-LAGs)
MC-LAGs with virtual private local area network service (VPLS)
MC-LAGs with redundant Ethernet virtual leased line (VLL) pseudowires
            7450 ESS   7450 ESS-1   7750 SR   7710 SR
LAGs        200        64           200       64
LACP ports  256        256          256       256
To keep traffic flows in sequence, traffic is distributed over the links in the LAG using a hashing
algorithm. The hash is based on the Internet Protocol (IP) or MAC header for VPLS unicast traffic,
and on the IP header for routed IP unicast/multicast traffic. For VPLS broadcast/unknown/multicast
traffic on service access points (SAPs), the service and SAP identifiers (IDs) are used, whereas on
service distribution points (SDPs) the distribution is the same as for VPLS unicast traffic. VLL
traffic distribution is based on the service ID, and the full label stack is used for multiprotocol
label switching (MPLS) traffic (along with the incoming port and system IP address, but excluding
the EXP/time to live [TTL] fields).
The service router implementation can configure a minimum number of links that need to be active
for the LAG to be considered up, and with dynamic open shortest path first (OSPF), it can configure
link costs relating to the number of active links.
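The per-flow distribution described above can be sketched as follows. This is an illustrative model only (the service routers' actual hash function is not documented here): flow-identifying header fields are hashed and the result selects one member link, so all packets of a flow stay on one link and remain in sequence, while different flows spread across the bundle.

```python
import zlib

def select_link(flow_key: bytes, active_links: list) -> str:
    """Map a flow to one active LAG member link.

    All packets carrying the same flow key hash to the same link,
    which keeps the flow in sequence; different flows spread across
    the links, giving the LAG its aggregate capacity.
    """
    if not active_links:
        raise RuntimeError("LAG is down: no active links")
    index = zlib.crc32(flow_key) % len(active_links)
    return active_links[index]

links = ["1/1/1", "1/1/2", "1/1/3", "1/1/4"]

# A flow (e.g. a source/destination IP pair) always lands on the same link.
flow = b"10.0.0.1->10.0.0.2"
assert select_link(flow, links) == select_link(flow, links)

# If a member link fails, flows are redistributed over the surviving
# links and the LAG stays up with reduced capacity.
links.remove(select_link(flow, links))
print(select_link(flow, links))
```

Note the trade-off visible in the sketch: removing a link changes the modulus, so some flows move to new links on failure, but ordering within each flow is preserved because the mapping is deterministic at any instant.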
Multi-Chassis Link Aggregation Group Redundancy for Layer 2 Virtual Private Networks | Technology Information Guide
Figure 1 shows a simple configuration of a LAG without LACP (left) and a LAG with LACP (right).
Figure 2 shows a simple example of two devices communicating through two links that are interconnected using a LAG.
Figure 1. LAG without LACP and LAG with LACP
LAG without LACP:

    lag 1
        port 1/1/1
        port 1/1/3
        no shutdown
    exit
    port 1/1/1
        ethernet
            autonegotiate limited
        exit
        no shutdown
    exit
    port 1/1/3
        ethernet
            autonegotiate limited
        exit
        no shutdown
    exit

LAG with LACP:

    lag 1
        port 1/1/1
        port 1/1/3
        lacp active administrative-key 32768
        no shutdown
    exit
    port 1/1/1
        ethernet
            autonegotiate limited
        exit
        no shutdown
    exit
    port 1/1/3
        ethernet
            autonegotiate limited
        exit
        no shutdown
    exit
[Figure 2: two devices connected through ports 1/1/1 and 1/1/3, combined into LAG 1]
When multiple sub-groups have been configured, the system needs to choose which subgroup should be
active. The selection criteria are:
a) Highest link count: the sub-group with the most eligible links, that is, the most links that
could possibly become active (local port state is up/linkup).
b) Highest weight: the sub-group with the highest aggregate weight; the aggregate weight is
calculated as the sum of (65535 - priority_i) over all eligible ports i in the sub-group (local
port state is up/linkup).
If the count/weight criteria are equal for the groups involved, the first group up is active and the
connectivity is not revertive after multiple failures.
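The selection criteria can be sketched as follows; this is an illustrative model, not Alcatel-Lucent's implementation. Link count is compared first, and the aggregate weight (65535 minus the configured port priority, summed over eligible ports) breaks the tie, so lower configured priorities yield higher weight.

```python
def subgroup_score(subgroup):
    """Score a sub-group for the active/standby decision.

    'subgroup' is a list of (port_priority, port_is_up) tuples; only
    ports that are up (eligible) count. Returns (link_count, weight)
    so that tuple comparison applies the criteria in order:
    highest eligible link count first, highest aggregate weight second.
    """
    eligible = [prio for prio, up in subgroup if up]
    # Aggregate weight: sum of (65535 - priority) over eligible ports.
    weight = sum(65535 - prio for prio in eligible)
    return (len(eligible), weight)

def choose_active(subgroups):
    """Pick the active sub-group from a {name: ports} mapping.

    In the real system a tie leaves the currently active group in
    place (non-revertive); this sketch simply takes the first maximum.
    """
    return max(subgroups, key=lambda name: subgroup_score(subgroups[name]))

groups = {
    1: [(32768, True), (32768, True)],   # two eligible links
    2: [(100, True)],                    # one link with a very high weight
}
print(choose_active(groups))  # sub-group 1: link count beats weight
```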
Figure 3 shows a LAG configuration where one port is automatically configured based on the IOM
and the other port is statically configured into sub-group 2.
To allow interoperability with systems that do not support processing of the out_of_sync message,
it is possible to disable sending such messages. In this case, no LACP messages are sent on
non-active sub-groups, removing these sub-groups from the LAG. The disadvantage is that it takes
longer to make a sub-group active after a failure, as the LACP state needs to be renegotiated
from scratch.
Two other features are the ability to specify slave-to-partner (links that are set in standby state by
the remote system are not eligible when using the highest-count algorithm) and hold-time (delays the
reporting to higher layers when the LAG is down); these features allow LACP processing to occur
where out_of_sync messages are not used.
Figure 4 shows output relevant to the LAG and port status information corresponding to the
configuration shown in Figure 3. Notice that port 1/1/1 is active, in sync and up, while 1/1/3
is standby, not in sync and linkup.
Figure 5 shows a typical application of LAG sub-groups, with a DSLAM connected to a service
router by a LAG with one active and one standby sub-group, each connected to different IOMs.
This not only provides capacity and redundancy in case of link/IOM/MDA failure, but also allows
shaping/policing to be applied to the DSLAM traffic.
[Figure 5: a DSLAM connected to a service router by LAG 1, with an active sub-group (ports 1/1/1 and 1/1/2 on IOM-1) and a standby sub-group (ports 2/1/1 and 2/1/2 on IOM-2), toward the provider network]

[Diagram: an access LAG split into active and standby MC-LAG halves on two systems, joined by the Multi-chassis LAG Control Protocol, with standard LAGs into the provider network]
MC-LAG
As noted in the basic properties of a LAG, each system is identified by the concatenation of its
MAC address and system priority. To allow a LAG to be established toward two different remote
systems, those systems must appear as a single system by presenting the same system MAC address
and priority. The concept of sub-groups is used again within the MC-LAG to keep some links active
and some standby, where a sub-group in an MC-LAG can only contain links on the same system.
The systems that are part of the MC-LAG exchange availability and status information through
an MC-LAG Control Protocol, which runs between peers making up the separate parts of the
MC-LAG. The protocol uses User Datagram Protocol (UDP) packets (destination port 1025) and
can use MD5 authentication. It is used as a keep-alive to ensure the peer system is active and
to synchronize information between the chassis involved. The synchronization covers:
The basic LAG parameters; if the system MAC address and/or priority for the MC-LAG are
not configured consistently on the peers, the whole MC-LAG remains in standby state
Sub-group weights and link counts to allow the peers to choose a single active sub-group
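The first synchronization rule can be sketched as a small consistency check; the dictionary keys here are illustrative, not actual configuration parameter names. If the peers do not present an identical LAG identity, the whole MC-LAG must stay standby rather than confuse the remote LACP partner.

```python
def mclag_admin_check(local, peer):
    """Check that both peers present the same LAG identity.

    If the system MAC address and/or system priority configured for
    the MC-LAG differ between the peers, the whole MC-LAG remains in
    standby state; otherwise it is eligible to go active.
    """
    for key in ("system_mac", "system_priority"):
        if local[key] != peer[key]:
            return "standby"
    return "eligible"

a = {"system_mac": "00:00:00:01:01:01", "system_priority": 100}
b = {"system_mac": "00:00:00:01:01:01", "system_priority": 100}
print(mclag_admin_check(a, b))  # eligible
b["system_priority"] = 200
print(mclag_admin_check(a, b))  # standby: mismatched identity
```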
The IP path between the MC-LAG peers should be redundant; otherwise if the MC-LAG protocol
fails, both peers would activate a local group. The protocol packets are marked QoS forwarding class
NC1 and should be prioritized appropriately on intermediate routers between the peers to avoid loss
during congestion. If a peer is found, the local sub-group is only activated after the MC-LAG has
synchronized between the peers.
The LAG portion of the configuration is unchanged from a standard LAG. Note that MC-LAGs
are only supported on Ethernet interfaces, which together with the LAG, must be in access mode.
The information about the MC-LAG is configured under the redundancy tree. A peer is configured
by specifying its IP address (to which the MC-LAG protocol packets are sent). The possibilities with
respect to peers and MC-LAGs are:
There can be up to 4 peers (as of Release 5.0 Revision 6)
There can be multiple MC-LAGs between peers
A single MC-LAG can only be configured under one peer (that is, it can only exist between
two peers)
If no matching peer can be found, the MC-LAG operates as a standard LAG; it uses its local
system MAC/priority
The source address to be used for peering is configured under the peer; by default, the system IP
interface address is used. The LAG ID and system MAC/priority for this MC-LAG are also configured under the peer, remembering that the system MAC/priority has to be consistent across peers.
Note that there is an option to specify the remote LAG identifier in the MC-LAG lag command
to allow the local/remote LAG identifiers to be different on the peers. The selection criteria for
the active sub-group of an MC-LAG are the same as described earlier for standard LAGs, with
one sub-group becoming active and the others staying in standby.
There are two timer options available in the MC-LAG configuration: keep-alive-interval and
hold-on-neighbor-failure. The keep-alive-interval option specifies the frequency of the messages
expected from the remote peer and is used to determine whether the remote peer is still active; if
hold-on-neighbor-failure consecutive messages are missed, the remote peer is assumed to be down and
a local sub-group is made active. Peer failure detection also benefits from next-hop tracking, which
sets the peer to down when no route exists in the routing table that could be used to reach the
peer. In networks without a default route, this results in a minimum failover time approximately
the same as the Interior Gateway Protocol (IGP) convergence time.
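The keep-alive logic can be sketched as follows; this is a minimal missed-message model, assuming the peer is declared down after hold-on-neighbor-failure intervals of silence (the real protocol's packet format and state machine are not described here).

```python
class McLagPeerMonitor:
    """Track MC-LAG peer liveness from keep-alive arrivals.

    keep_alive_interval: expected seconds between peer messages.
    hold_on_neighbor_failure: number of consecutive missed intervals
    after which the peer is declared down, allowing a local sub-group
    to be activated.
    """
    def __init__(self, keep_alive_interval=1.0, hold_on_neighbor_failure=3):
        self.interval = keep_alive_interval
        self.hold = hold_on_neighbor_failure
        self.last_rx = None

    def on_keepalive(self, now):
        """Record receipt of a keep-alive message at time 'now'."""
        self.last_rx = now

    def peer_is_down(self, now):
        # Never heard from the peer, or silence longer than
        # hold_on_neighbor_failure intervals: assume peer failure.
        if self.last_rx is None:
            return True
        return (now - self.last_rx) > self.hold * self.interval

mon = McLagPeerMonitor(keep_alive_interval=1.0, hold_on_neighbor_failure=3)
mon.on_keepalive(now=10.0)
print(mon.peer_is_down(now=12.0))  # False: within three intervals
print(mon.peer_is_down(now=14.5))  # True: more than 3 s of silence
```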
In dual Control Processor Module (CPM) systems, all MC-LAG information is synchronized to the
standby CPM.
There are two tools provided to the operator for MC-LAG management and troubleshooting:
A debug to monitor the multi-chassis state and packets
A command to force the state (active or standby) of a particular MC-LAG, peer or all MC-LAGs
(tools perform lag force); the new state remains until it is manually cleared (tools perform
lag clear-force)
Figure 8 shows the MC-LAG status and peering details.
The MC-LAG link status can be shown using the show lag command; it shows the active/standby
state of each link, together with the MC-LAG information as shown in Figure 9.
Peer Address        : 1.1.1.3
System Id           : 00:00:00:01:01:01
Admin Key           : 1
Lacp ID in use      : true
Selection Logic     : local master decided
Config Mismatch     : no mismatch
MC Peer Lag-id      : 1
MC System Priority  : 100
MC Active/Standby   : active
MC extended timeout : false
-------------------------------------------------------------------------------
Port-id        Adm     Act/Stdby Opr     Primary   Sub-group Forced  Prio
-------------------------------------------------------------------------------
1/1/1          up      active    up      yes       1         -       32768
-------------------------------------------------------------------------------
Port-id        Role      Exp   Def   Dist  Col   Syn   Aggr  Timeout Activity
-------------------------------------------------------------------------------
1/1/1          actor     No    No    Yes   Yes   Yes   Yes   Yes     Yes
1/1/1          partner   No    No    Yes   Yes   Yes   Yes   Yes     Yes
===============================================================================
A:7450-1#
[Diagram: a DSLAM connected by LAG 1 (ports 1/1/1 and 1/1/2 on each peer) to an active and a standby MC-LAG system, joined by the Multi-chassis LAG Control Protocol, toward the provider network]
Multi-Chassis Link Aggregation Groups with Virtual Private Local Area Network Service
An MC-LAG can be used as an access into a VPLS service. The operation provides a single active
connection into the VPLS service from the access system, with redundant standby connections,
similar to a hierarchical VPLS (H-VPLS) multi-tenant unit (MTU)2. The failover of the connectivity to a new active sub-group of an MC-LAG requires the same MAC-WITHDRAW function as
described for an MTU failover. This operation is implemented so that when an MC-LAG sub-group
transitions from active to standby/down within a VPLS service, a MAC-WITHDRAW message
is sent from the connected VPLS provider edge (PE) device to the other VPLS PEs through the
mesh-SDPs. This flushes the MAC addresses that have been learned in the VPLS service by the
LAG/sub-group that became standby/down.
There are no specific configuration commands required to enable this interaction; an MC-LAG
is simply configured (as described in the previous section) together with a standard VPLS service,
including the send-flush-on-failure parameter.
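The effect of the flush can be sketched with a toy forwarding database; the class and interface names are hypothetical, not the product's implementation. The point is that MAC addresses learned over the sub-group that went standby/down must be unlearned, so traffic is re-flooded and re-learned via the newly active PE.

```python
class VplsFdb:
    """Toy VPLS forwarding database: MAC address -> learned interface."""
    def __init__(self):
        self.entries = {}

    def learn(self, mac, interface):
        self.entries[mac] = interface

    def lookup(self, mac):
        # Unknown MACs would be flooded in a real VPLS.
        return self.entries.get(mac)

    def mac_withdraw(self, interface):
        """Flush all MACs learned over 'interface', as triggered by a
        MAC-WITHDRAW message when an MC-LAG sub-group goes standby/down."""
        self.entries = {m: i for m, i in self.entries.items()
                        if i != interface}

fdb = VplsFdb()
fdb.learn("00:00:00:aa:bb:01", "lag-1")      # learned via the MC-LAG SAP
fdb.learn("00:00:00:aa:bb:02", "sdp-5:100")  # learned via a mesh-SDP
fdb.mac_withdraw("lag-1")                    # MC-LAG failed over
print(fdb.lookup("00:00:00:aa:bb:01"))       # None: will be re-learned
print(fdb.lookup("00:00:00:aa:bb:02"))       # sdp-5:100: untouched
```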
Figure 11 shows an example of an application with a service router connected to a VPLS with MC-LAGs.
[Diagram for Figure 11: a service router connected by LAG 1 to active and standby MC-LAG peers, which attach to the VPLS through standard LAGs]
Traffic traversing an epipe must always pass between the two endpoints. It can be received over any
of the active objects at the ingress endpoint, but can only be sent out of the egress endpoint over
one object. At an endpoint, an MC-LAG SAP or pseudowire can be protected by associating it with
an Inter-chassis Backup (ICB) pseudowire, which terminates on the ingress of an epipe on a different
PE. If the local MC-LAG SAP/pseudowire fails, the ICB is used as a communication path for traffic.
Table 2 summarizes the endpoint configuration shown in Figure 12. Note that, for the purposes of
highlighting the configuration and status of the components, all of the devices are 7450 ESSs. However,
as standards-based signaling is used for both the LAG (LACP) and pseudowire status (Label Distribution
Protocol [LDP]), the 7450-2 and the 7450-4 could be any standards-based equipment supporting the
appropriate functions.
Using the epipe on the 7450-1 ESS as an example, the epipe has two endpoints (A and B). Endpoint
A contains an MC-LAG SAP and an ICB; endpoint B contains a pseudowire and an ICB. The ICBs
are cross-connected to the epipe on the 7450-3, to endpoints D and C respectively. Traffic normally
flows between A and B, that is, between the MC-LAG and the pseudowire. If the MC-LAG port
fails, any traffic received over the pseudowire at endpoint B would be transmitted to endpoint A
and then out of the ICB to endpoint D on the 7450-3 (it would then continue to endpoint C). This
path would be used until the 7450-3 makes its MC-LAG active, which would cause SDP 2:100 to
become active-active, making it the new traffic path.
[Table 2 and Figure 12: the 7450-4 connects through a standard LAG to an MC-LAG formed by the 7450-1 (active) and the 7450-3 (standby). On the 7450-1, endpoint A holds the MC-LAG SAP and an ICB, and endpoint B holds a pseudowire to the 7450-2 (sdp 1:100) and an ICB; on the 7450-3, endpoint C holds the MC-LAG SAP and an ICB, and endpoint D holds a pseudowire to the 7450-2 (sdp 2:100) and an ICB. The ICBs cross-connect the two epipes.]
[Diagram: the epipe on the 7450-2, with endpoint X holding the two spoke-SDPs and endpoint Y holding the SAP]
The ends of a pseudowire can have a status of up or down; when up, each end also has a state.
The state is signaled to the remote end of the pseudowire using an LDP notification message (as
described in RFC 4447, PWE3 Using LDP)3. There are a number of pseudowire states; this document
restricts the discussion to the active and standby states. At an endpoint, the preferred path
for transmitting traffic is a pseudowire that is up and has both ends in an active state
(active-active). If there are multiple pseudowires up and active at both ends, the one with the
lowest precedence is chosen as the transmission path. An ICB is preferred over a pseudowire whose
remote state is not active, unless the traffic entered the epipe on another ICB. Note that when no
remote active pseudowire or ICB is available, traffic can be sent on a pseudowire that is up but
whose remote state is other than active.
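The selection preference can be sketched as a ranking function; this is an assumed simplification of the rules above (field names are hypothetical), not the exact product behaviour: prefer the lowest-precedence active-active pseudowire, then an ICB (unless the traffic arrived on another ICB, since ICB-to-ICB forwarding is not permitted), then any pseudowire that is up.

```python
def pick_transmit_object(objects, arrived_on_icb=False):
    """Choose the egress object at an epipe endpoint.

    objects: list of dicts with keys name, kind ('pw' or 'icb'),
    up (bool), local, remote ('active'/'standby'), precedence (int).
    """
    up = [o for o in objects if o["up"]]
    # 1. An up pseudowire that is active at both ends, lowest precedence.
    active_active = [o for o in up
                     if o["kind"] == "pw"
                     and o["local"] == "active" and o["remote"] == "active"]
    if active_active:
        return min(active_active, key=lambda o: o["precedence"])["name"]
    # 2. An ICB, unless the traffic entered the epipe on another ICB.
    if not arrived_on_icb:
        icbs = [o for o in up if o["kind"] == "icb"]
        if icbs:
            return icbs[0]["name"]
    # 3. Last resort: an up pseudowire whose remote state is not active.
    remaining = [o for o in up if o["kind"] == "pw"]
    return remaining[0]["name"] if remaining else None

endpoint = [
    {"name": "sdp 1:100", "kind": "pw", "up": True,
     "local": "active", "remote": "standby", "precedence": 1},
    {"name": "icb 3:300", "kind": "icb", "up": True,
     "local": "active", "remote": "active", "precedence": 4},
]
# No active-active pseudowire is available, so the ICB is the backup path.
print(pick_transmit_object(endpoint))  # icb 3:300
```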
When an MC-LAG is added to an endpoint of an epipe, the MC-LAG active/standby state is
propagated to the (primary/secondary) pseudowires of the other endpoint in the same epipe, setting
the local state of those pseudowires (but not to the ICB, which is always in an active state when
up). If a SAP is added that is not part of an MC-LAG, its status is always in an active state when
up (note an ICB cannot be used with a non-MC-LAG SAP).
For example, in Figure 12, if the MC-LAG port 1/1/1 on the 7450-1 is in an active state, the 7450-1
signals active on SDP 1:100 to the 7450-2. If the same pseudowire is in an active state on the 7450-2,
that is, the pseudowire is in an active-active state, it will be the transmission path between the two
SAPs. Note that in this situation, the MC-LAG on the 7450-3 is in a standby state, causing SDP 2:100
to be standby-active and therefore not used. A failure of port 1/1/1 on the 7450-1 would switch the
active states to the lower transmission path.
The status of the MC-LAG is propagated to the pseudowire, but not vice versa. Therefore, if SDP 1:100
fails, the ICB is selected at endpoint B as the egress for traffic on the 7450-1 and SDP 2:100 becomes
active on the 7450-2. Traffic from endpoint B through the ICB would arrive at endpoint C on the
7450-3 and be sent to endpoint D on the same epipe. However, as the MC-LAG port on the 7450-3
is in standby state, SDP 2:100 will be in a standby-active state. As SDP 2:100 is the only available
pseudowire (remembering traffic cannot switch between ICBs) and it is up (though locally in a
standby state), it will be used to forward traffic to the 7450-2. Similarly, the 7450-2 will send traffic
on SDP 2:100 even though it is in a standby-active state.
There are a number of detailed rules covering the definition of endpoints:
An epipe can consist of up to two named endpoints (the names are locally significant)
There can be up to one SAP per endpoint
An endpoint without a SAP can have primary and/or secondary spoke-SDPs; only one of these
can be used to forward traffic at any time and the choice is governed by its precedence
There can be up to one primary spoke-SDP per endpoint; if up, it is always used for forwarding
traffic from that endpoint (it has an implicit precedence of 0)
There can be up to 4 secondary spoke-SDPs per endpoint and a precedence value (1-4) can be
configured to determine which of these is used to forward traffic when more than one is up and
no primary is configured (lowest precedence is preferred with a default of 4); if all secondaries have
the same precedence, the one with the lowest virtual channel (VC) ID is chosen (a secondary
becoming active-active does not pre-empt an existing active-active secondary, though it
could pre-empt a secondary that is in an active-non-active state)
There can be up to one ICB spoke-SDP, which must be identified in the configuration; this is
used as a backup forwarding path in case the active MC-LAG SAP goes down, or where there
is no SAP, when all other pseudowires in the same endpoint go down or are in standby state
If an MC-LAG SAP is added to an endpoint, the only other possible addition to that endpoint
is one ICB, noting then that the SAP must be part of an MC-LAG
3. draft-muley-dutta-pwe3-redundancy-bit-00.txt
If an ICB is added to an endpoint, then a SAP can be added only if it is part of an MC-LAG
If a non-MC-LAG SAP is added to an endpoint, no other objects can be added to that endpoint
An endpoint can have up to four spoke-SDPs; given the rules stated, these can be composed of:
One primary spoke-SDP
One or more secondary spoke-SDPs
One ICB spoke-SDP
An endpoint is up if at least one of its associated objects has a status of up; it is down if all
of its associated objects have a status of down
Forwarding between two ICBs in the same epipe is not permitted
The only situation where traffic is dropped is when all objects in an endpoint are down-standby
or the only available path involves forwarding between ICBs
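The endpoint rules above can be condensed into a small validity check; this is a sketch under the assumption that each object has been classified beforehand (the function and classification labels are hypothetical, not product code).

```python
def validate_endpoint(objects):
    """Check an epipe endpoint's object mix against the rules above.

    objects: list of strings, each one of 'sap' (non-MC-LAG SAP),
    'mclag_sap', 'primary', 'secondary', 'icb'.
    Returns None if the combination is valid, else the violated rule.
    """
    spokes = sum(objects.count(k) for k in ("primary", "secondary", "icb"))
    if objects.count("sap") + objects.count("mclag_sap") > 1:
        return "at most one SAP per endpoint"
    if objects.count("primary") > 1:
        return "at most one primary spoke-SDP per endpoint"
    if objects.count("secondary") > 4:
        return "at most four secondary spoke-SDPs per endpoint"
    if objects.count("icb") > 1:
        return "at most one ICB spoke-SDP"
    if spokes > 4:
        return "at most four spoke-SDPs per endpoint"
    if "sap" in objects and len(objects) > 1:
        return "a non-MC-LAG SAP must be the only object in its endpoint"
    if "mclag_sap" in objects and (
            "primary" in objects or "secondary" in objects):
        return "an MC-LAG SAP may only be combined with an ICB"
    return None

print(validate_endpoint(["mclag_sap", "icb"]))             # None: valid
print(validate_endpoint(["sap", "icb"]))                   # a rule string
print(validate_endpoint(["primary", "secondary", "icb"]))  # None: valid
```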
To describe the configuration of an MC-LAG with redundant Ethernet VLL pseudowires, consider
the simple example shown in Figure 12.
An epipe is configured on the 7450-4 containing a SAP and a two-port standard LAG. An MC-LAG
is created between the 7450-1 and the 7450-3 using the MC-LAG configuration shown in Figure 7:
Configuration for one peer of an MC-LAG. An epipe is created on both the 7450-1 (with endpoints
A and B, and their associated ICBs) and the 7450-3 (with endpoints C and D, and their associated
ICBs) to connect the MC-LAG to the spoke-SDPs. Figure 13 shows the configuration on the 7450-1.

Figure 13. MC-LAG with Redundant Ethernet VLL Pseudowires: the 7450-1 Configuration

    epipe 1 customer 1 create
        endpoint A create
        exit
        endpoint B create
        exit
        sap lag-1:100 endpoint A create
        exit
        spoke-sdp 1:100 endpoint B create
        exit
        spoke-sdp 3:200 endpoint A icb create
        exit
        spoke-sdp 3:300 endpoint B icb create
        exit
        no shutdown
    exit
The MC-LAG sub-group on the 7450-1 is active and that on the 7450-3 is standby. This status
is propagated to the spoke-SDPs in the epipe on the 7450-2, as can be seen from the Peer Pw Bits
shown in Figure 17: None on 1:100, indicating the bits are clear (that is, the pseudowire is active),
and pwFwdingStandby on 2:100, indicating that the pseudowire is standby. Note that both spoke-SDPs
are part of endpoint X.
The SDPs in the active endpoint in the epipe on the 7450-2 can be displayed as shown in Figure 17.
Figure 17. The 7450-2 Epipe SDPs in the Active Endpoint
A:7450-2# show service id 1 sdp detail
===============================================================================
Services: Service Destination Points Details
===============================================================================
-------------------------------------------------------------------------------
 Sdp Id 1:100 -(1.1.1.1)
-------------------------------------------------------------------------------
SDP Id             : 1:100                   Type              : Spoke
VC Type            : Ether                   VC Tag            : n/a
Admin Path MTU     : 0                       Oper Path MTU     : 1556
Far End            : 1.1.1.1                 Delivery          : MPLS

Admin State        : Up                      Oper State        : Up
Acct. Pol          : None                    Collect Stats     : Disabled
Ingress Label      : 131069                  Egress Label      : 131070
Ing mac Fltr       : n/a                     Egr mac Fltr      : n/a
Ing ip Fltr        : n/a                     Egr ip Fltr       : n/a
Ing ipv6 Fltr      : n/a                     Egr ipv6 Fltr     : n/a
Admin ControlWord  : Not Preferred           Oper ControlWord  : False
Last Status Change : 03/19/2007 13:19:18     Signaling         : TLDP
Last Mgmt Change   : 03/19/2007 13:19:18
Endpoint           : X                       Precedence        : 1
Flags              : None
Peer Pw Bits       : None
Peer Fault Ip      : None
Peer Vccv CV Bits  : lspPing
Peer Vccv CC Bits  : mplsRouterAlertLabel
MAC Pinning        : Disabled
-------------------------------------------------------------------------------
 Sdp Id 2:100 -(1.1.1.3)
-------------------------------------------------------------------------------
Admin State        : Up                      Oper State        : Up
Acct. Pol          : None                    Collect Stats     : Disabled
Ingress Label      : 131068                  Egress Label      : 131068
Ing mac Fltr       : n/a                     Egr mac Fltr      : n/a
Ing ip Fltr        : n/a                     Egr ip Fltr       : n/a
Ing ipv6 Fltr      : n/a                     Egr ipv6 Fltr     : n/a
Admin ControlWord  : Not Preferred           Oper ControlWord  : False
Last Status Change : 03/19/2007 13:28:49     Signaling         : TLDP
Last Mgmt Change   : 03/19/2007 13:28:49
Endpoint           : X                       Precedence        : 4
Flags              : None
Peer Pw Bits       : pwFwdingStandby
Peer Fault Ip      : None
Peer Vccv CV Bits  : lspPing
Peer Vccv CC Bits  : mplsRouterAlertLabel
MAC Pinning        : Disabled
===============================================================================
A:7450-2#
Two additional control parameters are available under the endpoint command: revert-time and
active-hold-delay. The revert-time parameter is the time taken to revert back to the primary
spoke-SDP from a secondary spoke-SDP when the primary becomes active on both ends (active-active).
The active-hold-delay parameter is a delay in sending the standby status indication on the endpoint
spoke-SDPs when the local MC-LAG transitions to standby/down; this can be useful to allow time for
the peer MC-LAG system to update its MC-LAG/spoke-SDP status information before informing the remote
pseudowire peer of the standby status. An example of the use of the active-hold-delay parameter is
explained for the scenario presented in Figure 20.
As with the MC-LAG, an operator command is available to force the switchover of an endpoint
to a specific spoke-SDP (tools perform service id <service-id> endpoint <endpoint-name>
force-switchover <sdp-id>).
To enable full redundancy across a provider network between two access systems, the logical
topology shown in Figure 18 can be built.
Figure 18. Full Redundancy with MC-LAG with Redundant Ethernet VLL Pseudowires
[Diagram for Figure 18: two access systems with standard LAGs, each connected to an MC-LAG pair; the four MC-LAG systems are interconnected by pseudowires and ICBs so that exactly one end-to-end path is active-active, with the remaining paths standby]
In Figure 18 the active link in the LAGs determines the local state of the spoke-SDPs; only a single
path in the topology will be active-active and therefore this will be the traffic path. A failure of a
single LAG link or an individual system will result in a different active-active path which would
be the preferred communications path.
If one of the active MC-LAG SAPs fails on the LAGs, the active status moves over to the other half
of the MC-LAG. If that happens, there could be in-flight traffic received by the system on which the
MC-LAG SAP failed; in order to avoid dropping that traffic, it is forwarded over the ICB to the other
half of the MC-LAG (to be forwarded through the epipe out of its MC-LAG SAP).
The configuration of one of the MC-LAG systems would have an epipe with two endpoints; one with
an MC-LAG SAP and an ICB, and the other with two spoke-SDPs and an ICB, as shown in Figure 19.

A more complicated configuration could be used to provide redundancy for multiple failures for
DSLAM-broadband remote access server (BRAS) communications; this is shown in Figure 20.
[Diagram for Figure 20: a DSLAM connects through a standard LAG to Service Router-F, which has four secondary spoke-SDPs (precedence 1 to 4) toward three central service routers; these run MC-LAGs, with ICBs, toward two service routers on the left, each connected to a BRAS through a standard LAG. Active-active and standby-active paths are indicated, with the precedence-1 path as the traffic path.]
The design goal illustrated in Figure 20 is, if the active path were to fail, the next lower physical
path could be used, thereby keeping the customers on the same BRAS. If for some reason both of
the top two paths failed, communications would be restored by one of the lower two paths, albeit
to a different BRAS.
In this topology, Service Router-F has 4 secondary spoke-SDPs connecting to the central three
service routers, which in turn have MC-LAGs to the two service routers on the left. In order to
avoid the possible loss of communication that would occur by reverting to a newly active primary
pseudowire, all of the spoke-SDPs on Service Router-F are secondaries. Notice that there are two
paths that are active-active; however, the precedences are such that the upper spoke-SDP is
preferred. In fact, traffic from right to left will only use the top path due to the precedence, but
traffic from left to right could be received on Service Router-F over either active-active path
(the precedence only governs the forwarding of traffic, not its receiving).
In order to ensure that if the A-C path fails, the next lowest path (A-D-F) becomes active, it is
necessary to specify an active-hold-delay of, for example, 3 (300 ms). This gives time for the next
lowest path to be signaled as active-active from end-to-end (specifically for the LAG link A-D to
become active). Without this, there would be the possibility that Service Router-F would choose
the path F-D-B before path F-D-A was fully signaled as active. The ICB from the MC-LAG endpoint on Service Router-C would be used to forward traffic to Service Router-D then to Service
Router-A until Service Router-F has been notified of the failure of the A-C link.
Figure 21. The 5620 SAM Service View of a Simple LAG with Redundant Ethernet VLL Pseudowires (see Figure 12)
Conclusion
In this document, the concept of LAGs has been described, showing the evolution from a basic
LAG, through the use of sub-groups to achieve a LAG with specific QoS, to MC-LAGs in which
one end of the LAG can span two systems. Configuring the MC-LAG as access to a VPLS service
was discussed, and extending the LAG across a provider's network using redundant Ethernet VLL
pseudowires was detailed.
There are no specific operations, administration and maintenance (OA&M) changes relating to
MC-LAGs; the entire current portfolio of OA&M services (MAC/label switched path [LSP]/SDP/
service ping/traceroute) continues to be applicable to the configurations described in this document.
Providers can go beyond using LAG technology as a simple way to increase capacity by using LAGs
to provide increased redundancy, both at the network edge and within the service delivery infrastructure. The MC-LAG capability provides a unique way of extending redundant connections to
the network access, increasing uptime in triple play services when used in conjunction with DSLAMs,
or in business services when used with a Layer 2 CPE. Having maximized the edge redundancy, providers
can combine MC-LAGs with both VPLS and Ethernet VLL services to achieve end-to-end redundancy
across their networks.
www.alcatel-lucent.com
Alcatel, Lucent, Alcatel-Lucent and the Alcatel-Lucent logo are trademarks of Alcatel-Lucent. All other
trademarks are the property of their respective owners. The information presented is subject to change
without notice. Alcatel-Lucent assumes no responsibility for inaccuracies contained herein.
© 2008 Alcatel-Lucent. All rights reserved. WLN2468071224 (02)