Chapter 17 Network Reconvergence 413
Copyright 2004 Nortel Networks Essentials of Real-Time Networking
rerouting or make-before-break rerouting, because a rerouted connection
segment is established before the incumbent connection segment is
released. These features are based on the ATM Forum's mandatory
requirement for asymmetrical soft rerouting procedures specified for PNNI
edge-based rerouting.
The Reduced Cell Loss (RCL) mechanism reduces cell loss during path
optimization, minimizing network disruption. RCL is accomplished with
Operations and Maintenance (OAM) cells and the placement of OAM Segment
Boundaries: an End of Transmission (EOT) marker on the incumbent
connection segment triggers the data path swap at the rendezvous node.
The cell loss during the data path swap is proportional to the trip delay of
the incumbent connection segment and is less than that of the path
optimization procedure based on the ATM Forum guidelines.
Impact of PNNI
Access Control
Figure 17-14: Access Control
The security of ATM connections is handled through the Access Control
mechanism, which can filter on both the called and the calling address.
Why is this required? Without it, any connection could initiate and
terminate on any switch, opening the possibility of security issues.
Access Control lists can be used to allow or deny connectivity to
business partners as relationships change: a simple change to access
privileges enables or removes connectivity.
[Figure 17-14 shows source 4710 setting up a call to destination 4720
across several switches. Each switch applies a UNI Calling Address Filter
(Allow: 4730, 6510; Disallow: 8660, 9630) and a UNI Called Address Filter
(Allow: 4720, 4730, 6510; Disallow: 4750, 4770, 8660).]
Supported Address Formats (including
summarized) : X.121, DCC, E.164, ICD
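The filtering behavior in Figure 17-14 can be sketched as a simple allow/deny check on both addresses. This is an illustrative model only; the function name and data layout are ours, and the prefixes are taken loosely from the figure.

```python
# Sketch of a UNI address filter as in Figure 17-14 (illustrative model).
# A call is admitted only if both the calling and the called address pass
# their respective allow/deny lists.

def admit_call(calling, called, calling_filter, called_filter):
    """Return True if the call setup passes both UNI address filters."""
    def passes(addr, flt):
        if addr in flt["deny"]:
            return False
        return addr in flt["allow"]
    return passes(calling, calling_filter) and passes(called, called_filter)

calling_filter = {"allow": {"4730", "6510"}, "deny": {"8660", "9630"}}
called_filter = {"allow": {"4720", "4730", "6510"},
                 "deny": {"4750", "4770", "8660"}}

print(admit_call("4730", "4720", calling_filter, called_filter))  # True
print(admit_call("8660", "4720", calling_filter, called_filter))  # False
```

Changing a business partner's connectivity then amounts to moving its prefix between the allow and deny sets.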
QoS Variance
Variance is provisioned per QoS. It allows a call to see more available
paths in the network. The green lines in Figure 17-15 represent the paths
available for a call to be routed on.
Figure 17-15: QoS variance under PNNI
For example, assume the middle two paths in Figure 17-15 provide a Cell
Transfer Delay (CTD) of 150 ms. The upper path provides a CTD of 250
ms and the lower path a CTD of 220 ms.
Without variance, if the call setup request requires a Cell Transfer Delay of
200 ms, only the middle two paths would be acceptable. This limits the
choices in routing the connection across the network. If there are many
such connections, the middle two paths will become more heavily loaded
than the only slightly different upper and lower paths.
If the QoS variance is set to 25%, all paths meeting a CTD metric of 250
ms (200 ms*125%) would be acceptable. Thus, the upper and lower paths
could be used and calls would have more paths to choose from. Combined
with one of the load balancing algorithms, this will provide excellent call
and load distribution.
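The variance calculation above can be sketched in a few lines. The CTD values come from the Figure 17-15 example in the text; the function name is ours.

```python
# Sketch: QoS variance widens the set of acceptable paths, using the
# Figure 17-15 example values (function and path names are illustrative).

def acceptable_paths(paths, required_ctd_ms, variance_pct):
    """Return paths whose CTD meets the requirement relaxed by the variance."""
    limit = required_ctd_ms * (1 + variance_pct / 100.0)
    return [name for name, ctd in paths.items() if ctd <= limit]

paths = {"upper": 250, "middle-1": 150, "middle-2": 150, "lower": 220}

print(sorted(acceptable_paths(paths, 200, 0)))   # ['middle-1', 'middle-2']
print(sorted(acceptable_paths(paths, 200, 25)))  # all four paths qualify
```

With variance at 0%, only the two middle paths meet the 200 ms requirement; at 25%, the limit rises to 250 ms and all four paths become candidates for load balancing.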
What you are seeing is the ability of the ATM network to evaluate all of the
available links within the network and determine each individual link's
ability to provide the desired QoS. This can be tricky, but the network
engineer can use it to advantage: by slightly relaxing the QoS requirements
on a service, a number of parallel links become available, allowing ATM to
load-share across them and making data transfer across the network more
efficient. By setting the connection QoS requirements too stringently, the
network engineer can choke the links, preventing connection completion
even while bandwidth is available.
What you should have learned
In this chapter, you learned about the options and strategies available to
configure and design a fault-tolerant network that can withstand failure
and continue to provide quality service. There are a number of ways this
can be done, and it is the network engineer's responsibility to provide a
cost-effective means to accomplish this goal.
Remember, the selection of redundancy features and strategies should be
predicated on the impact, measured in terms of loss of revenue, of network
failures. The determination will depend on where you are introducing
elements into the network and where partial or full redundancy is required.
You should also be able to evaluate which protocols and ports can be used
to provide higher-layer redundancy. All of these things are blended together
to provide a reliable, cost-effective network.
Chapter 18
MPLS Recovery Mechanisms
Ali Labed
Figure 18-1: Transport path diagram
[Figure 18-1 shows the real-time protocol stack viewed from the application
perspective down to the network: audio, voice, and video codecs over
RTP/RTCP, session control (SIP, H.323, RTSP) and gateway control
(H.248/MGCP/NCS) real-time control over UDP/TCP/SCTP over IP, carried by
MPLS, ATM (AAL1/2, AAL5), Frame Relay, or Ethernet over SONET/TDM, on
copper, fiber, cable (DOCSIS/HFC), WLAN, cellular, or xDSL media, with
MPLS highlighted.]
Concepts covered
The various MPLS protection schemes
The components of an MPLS recovery solution
LSP Setup using RSVP-TE
LSP monitoring, detection, and notification using RSVP-TE and
ITU-T Y.1711
MPLS Scope of Recovery - Global and Local
Introduction
Network Survivability refers to the capability of the network to maintain
service continuity in the presence of faults within the network. This can be
accomplished by recovering 'quickly' from network failures
[INTERNET-TE]. The requirement on the maximum length of the interrupted
period depends on the application. This chapter covers network survivability
in Multiprotocol Label Switching (MPLS) networks.
Traditional IP networks support only one class of service, the best-effort
class, and network survivability is provided by Layer 3 rerouting. This
is a concern for applications requiring highly reliable service, since
Layer 3 recovery ranges from several seconds to minutes, due to the time
needed by a Layer 3 routing algorithm to converge (and the hold-off timers
needed so that all routing advertisements connected with a failure can be
collected).
Since MPLS [MPLS-ARCH] is a technology of choice in future IP-based
transport networks and is gaining wide acceptance among equipment
vendors and network operators, it is important that MPLS-capable products
be able to provide protection and restoration of traffic. MPLS networks
establish Label Switched Paths (LSPs), where packets with the same label
follow the same path. This potentially allows MPLS networks to
preestablish protection LSPs for working LSPs and achieve better
protection switching times than legacy IP networks.
Not all LSPs need the same degree of protection, in terms of the
recovery time and the attributes of the recovery path (for example,
bandwidth and maximum number of hops). Therefore, one can use the LSP
preemption priority as a service differentiation parameter.
The recovery mechanisms presented in this chapter are based on the use of
the RSVP-TE (Resource Reservation Protocol with Traffic Engineering
extensions) signaling protocol and the Operation, Administration, and
Maintenance (OAM) mechanisms described in ITU-T Y.1711 and MPLS ping. The
rationale behind the choice of RSVP-TE (over LDP) is that it is generally
agreed to be the predominant signaling protocol in MPLS networks.
MPLS protection schemes
Depending on the targeted degree of network availability, one can choose
among several LSP protection strategies and protection schemes. The first
distinction is between protection switching and rerouting.
Protection switching and rerouting
In Protection switching, also called Path switching, the protection path is
preplanned: it is created prior to the detection of a failure on the working
path (generally at the same time the working path is created). When the
resources are reserved on the protection path at the time it is created,
this is called prereserved resources. The alternative is called
on-demand resource reservation: the resources on the recovery path are
not reserved until the failure is detected.
In rerouting, the recovery path is not preplanned, but established on
demand. The operation of finding (creating) a protection path is postponed
until detection of a failure on the working path. The protection path is said
to be created dynamically, and, therefore, the only resource reservation
option available is resources reserved on demand.
In order to achieve the lowest possible recovery time, Path switching is
required, rather than rerouting. Path switching further improves the
recovery time when used with the prereserved resources option. However,
the on-demand resource reservation option achieves better resource
utilization, as no resources are committed until after the fault occurs.
Protection schemes
The expression protection scheme, or protection model, designates the
strategies noted 1+1, 1:1, 1:N, and M:N. In all of these strategies, the
protection path must be disjoint from the corresponding working path on
the working path segment (link, node, or the whole path) that is targeted
for protection. Two paths are said to be disjoint on a given segment of
the working path if a failure on one path does not cause a failure on the
other path on the same segment.
There are several protection schemes, corresponding to various degrees of
network availability. Either one protection path protects one working path,
namely the models 1+1 and 1:1, or one protection path protects N working
paths, noted 1:N. This latter model may be generalized by allowing M
protection paths to protect N working paths (M<N), noted M:N.
In the 1+1 strategy, the traffic travels on the working path LSP and the
protection path LSP simultaneously. Obviously, the working and protection
paths must be disjoint. Under normal operating conditions, the LSP's
egress Label Edge Router (LER) accepts the packets arriving on the working
path LSP and drops those arriving on the protection path LSP. However,
upon occurrence of a failure on the working path LSP, and after the egress
LER of that working path receives the failure notification (or detects the
failure itself), the egress LER starts accepting packets arriving on the
protection path.
In the 1:1 (1:N) strategy, the traffic is carried only on the working path.
The protection path may carry lower-priority traffic that can be preempted
upon occurrence of a failure on the protected path (on one of the protected
paths). The M:N protection model is a variant of the 1:N model with M
protection paths instead of one. In these three protection models, all
paths must be disjoint (that is, physically and logically separated, except
for the endpoints) in order to meet the requirement of protection against
single failures.
In the 1:N protection model (N>1), upon occurrence of a failure on one of
the N working paths, the traffic that used to travel on the failed path is
switched to the protection path. The remaining N-1 working paths are no
longer protected, and a new protection path (or paths) must be identified
as soon as possible.
The degree of network availability decreases in the following list of
protection schemes from left to right: 1+1, 1:1, 1:N. However, the higher
the degree of network availability provided by a protection model, the
higher the resource consumption of that protection model (assuming that in
the cases of 1:1 and 1:N, the protection path is used to carry lower priority
traffic).
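The 1:N switchover just described can be sketched as a small state machine. The class and names are ours, purely illustrative; only the behavior (one shared protection path, survivors left unprotected after a failure) comes from the text.

```python
# Illustrative sketch of the 1:N protection model: N working paths share
# one protection path; after a failure the remaining N-1 working paths
# keep carrying traffic but are no longer protected.

class OneToN:
    def __init__(self, working, protection):
        self.working = set(working)   # working paths carrying traffic
        self.protection = protection  # shared protection path, free until a failure
        self.protected = True         # do the working paths still have protection?

    def fail(self, path):
        """Switch the failed path's traffic onto the shared protection path."""
        self.working.discard(path)
        switched_to, self.protection = self.protection, None
        self.protected = False        # the remaining N-1 paths are now exposed
        return switched_to

model = OneToN(["WP1", "WP2", "WP3"], "PP")
print(model.fail("WP2"))      # traffic moves onto PP
print(sorted(model.working))  # ['WP1', 'WP3'] still up, but unprotected
print(model.protected)        # False: a new protection path must be found
```

The 1+1 and 1:1 models are the degenerate case of a single working path, and M:N generalizes this with a pool of M protection paths instead of one.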
Components of an MPLS recovery solution
This section discusses protection of LSPs traffic, and describes the
functional components that are required for any LSP protection solution.
The ensemble of those components is called a Recovery System.
The list of components of a Recovery System in an MPLS network follows
the intuitive chronological sequence involved in a recovery operation.
There is a need for surveillance means, called Monitoring and Detection,
in order to detect failures or defects. Once a failure is detected,
another component, called Notification, transmits a notification message
to the network element, called the decision node or Path Switch LSR (PSL),
that has the responsibility to act upon the message. The decision node then
finds an alternate path (if not preplanned), sets up an LSP on that path
(if not prereserved), and switches the traffic to that new path. Finally,
in certain cases, such as when the protection path is of lesser quality
than the original working path, there is a need for a mechanism to switch
back, or revert, the traffic to the recovered working path. This
functionality is fulfilled by the Switch-Back component.
For this system to be complete, it must be able to associate the protection
path with its corresponding working path, a function fulfilled by the
Association-Configuration component. This component also has the
responsibility of interfacing with RSVP-TE to set up or tear down an LSP,
and provides RSVP-TE with the explicit route object. The second required
component is a routing mechanism used to find LSP routes based on a set of
constraints. These two components are not fully described in this text;
however, we give a brief description of the features that are important to
the present topic. A last component, not covered here, is needed for
resource optimization purposes in order to achieve path maintenance: the
optimal arrangement of paths is not necessarily achieved by incrementally
adding paths to the network.
Association-Configuration component
There should be a component that keeps track, at the decision nodes, of the
association between the Working Paths (WPs) and their corresponding
Protection Paths (PPs).
Monitoring component
In order to detect the faults (described in the Detection section), the
network must be monitored. An important issue addressed by the
monitoring component is the frequency of monitoring. The higher the
frequency of monitoring, the faster the defect is detected, but the higher the
overhead incurred. The recovery target time dictates the frequency of
monitoring.
Furthermore, monitoring may be available at network layers below the MPLS
layer, such as the physical and data-link layers. At those layers,
monitoring happens at higher frequencies than at the higher layers.
Detection component
The design of the detection functionality needs to address the issues of
which defects to detect, which component detects each of the defects, and
at which network layer. Mechanisms must be provided in order to detect
network connectivity defects. Furthermore, in the context of LSP
protection, there is a need for an MPLS layer OAM mechanism in order to
detect MPLS-fabric defects. Two dimensions to this is sins of omission
(that is, missing heartbeat, large gaps in sequence numbers) and sins of
commission (leakage of traffic). Defects may result in multiple points of
detection; not all of which may be able to perform notification. Network
impairments, such as congestion, that result in lower throughput are not
included because they are outside the scope of this chapter.
Notification component
Upon occurrence of a fault and after its detection, the detecting node
needs to have means to notify the decision node, and needs to know when it
should send its notification: immediately or after a predefined delay. In
other words, the Notification mechanism addresses the following issues:
Who to notify. The notification message needs to be transported
and forwarded from the detection node to the decision node.
When to notify. In order to allow a lower layer recovery
mechanism, if any, to take place, the MPLS layer must react to a
defect if it persists for a time defined to be long enough to allow the
lower layer to try to recover from the failure.
How to notify. The notification message needs a transport method.
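The "when to notify" rule above amounts to a hold-off timer, which can be sketched as follows. The timings are invented for illustration (a SONET-style ~50 ms lower-layer protection window is a common rule of thumb, not a figure from the text).

```python
# Sketch of the notification hold-off rule: the MPLS layer escalates a
# defect only if it outlasts the time the lower layer (for example,
# SONET protection) is allowed to recover on its own. Timings invented.

def should_notify(defect_duration_ms, lower_layer_holdoff_ms=50):
    """Notify the decision node only if the lower layer failed to recover."""
    return defect_duration_ms > lower_layer_holdoff_ms

print(should_notify(10))    # transient: lower layer may still fix it -> False
print(should_notify(200))   # persistent: escalate to the MPLS layer -> True
```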
Path Switching
After being notified of a failure on the working path of an LSP, the decision
node needs to switch traffic from the failed working path to the
corresponding protection path.
Routing component
Within the context of MPLS, the path of an LSP can be computed and
established in various ways. This task can be achieved by an Interior
Gateway Protocol (IGP), which chooses the (generally) shortest path.
Other ways to compute and establish an LSP path include manually,
automatically online using constraint-based routing processes, and
automatically offline using constraint-based routing entities implemented
on external support systems [INTERNET-TE].
Constraint-based routing refers to a class of routing systems that
compute routes through a network subject to the satisfaction of a set of
constraints and requirements [INTERNET-TE]. Constraint-based routing
processes can be provided by a Traffic Engineering system, which can be
centralized or distributed. In the centralized design, all decisions are
centralized, as is the necessary information about the network to make
those decisions. In the decentralized case, decisions are made by each
router autonomously based on the router's view of the state of the network,
which requires a protocol between these decision entities to exchange
information on that state.
Switching back component
In certain scenarios, such as when the protection path is of lesser
quality (for example, less bandwidth) than the working path, there is a
need for a switchback operation: the traffic is switched back to the
original working path after it recovers from the failure.
Monitoring, detection and notification mechanisms
The following section explores the mechanisms for monitoring the network
for faults, detecting those faults and notifying about them. The section also
describes why the RSVP state refresh protocol is not applicable in large
networks. That drawback is overcome by the RSVP-TE Hello protocol. We
proceed with a description of ITU-T Y1711 and LSP Ping and show how
these protocols complement RSVP-TE Hello.
LSP set up and state refresh using RSVP-TE
RSVP-TE is used when setting up (signaling) LSPs. Pinned LSPs are
considered; that is, an RSVP-TE setup (Path) message includes an Explicit
Route object to set up an LSP.
In order to set up an LSP, the LSP ingress LER builds an RSVP-TE Path
message and sends it to the LSP egress LER. The Path message includes
the explicit route (ER) that must be followed by that message. The Path
message establishes an RSVP Path state on each node it traverses (each of
which must be listed in the ER object) towards the egress LER. To respond
to that Path message, the egress LER builds a Resv message and sends it to
the ingress LER. The Resv message crosses, in the reverse direction, the
same path traversed by the corresponding Path message and establishes an
RSVP Resv state on each node it traverses towards the ingress LER (see
Figure 18-2). Those states include the parameters associated with the
corresponding LSP.
Figure 18-2: LSP setup using RSVP-TE signaling protocol
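The Path/Resv exchange can be sketched as follows. This is a simplified model, not a real RSVP-TE implementation; the function and node names are ours.

```python
# Minimal sketch of RSVP-TE LSP setup: the Path message installs Path
# state downstream along the explicit route, then the Resv message
# retraces the route and installs Resv state upstream.

def setup_lsp(explicit_route):
    """Simulate the Path/Resv exchange; return the per-node soft state."""
    state = {node: set() for node in explicit_route}
    for node in explicit_route:            # Path travels ingress -> egress
        state[node].add("Path")
    for node in reversed(explicit_route):  # Resv travels egress -> ingress
        state[node].add("Resv")
    return state

er = ["Ingress", "N1", "N2", "Egress"]
state = setup_lsp(er)
print(all(state[n] == {"Path", "Resv"} for n in er))  # True: LSP established
```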
The RSVP (Path and Resv) states are called soft states; that is, they need
a periodic refresh in order not to be deleted (and the corresponding LSP
torn down). For a given LSP and a given node traversed by that LSP, the
RSVP-TE refresh operates as follows:
The RSVP-TE process on the node periodically retransmits the Path message
to its downstream neighbor.
The RSVP-TE process on the node periodically retransmits the Resv message
to its upstream neighbor.
A node detects a failure when it does not receive the expected refresh
message from a neighbor after a (configurable) delay. Upon detection of a
failure, a node sends an RSVP tear message to the end node.
Figure 18-3: RSVP tear message upon detection of a failure
In order to detect faults in a timely fashion, the refresh messages must
run at a high frequency. However, this raises a scalability issue, as the
refresh mechanism is per LSP. This issue has been overcome in RSVP-TE,
which extends RSVP with several features and mechanisms, including a
keep-alive protocol, the RSVP-TE Hello protocol, described below.
RSVP-TE Hello
RSVP-TE Hello protocol provides node-to-node failure detection. It runs
between neighboring nodes (at the control plane). It is a Layer 3 keep-alive
mechanism that enables RSVP nodes to detect when a neighboring node is
not reachable. The neighbors periodically exchange keep-alive (Hello)
messages. The loss of communication with a neighbor is declared after a
configurable number (default=3) of consecutive Hello messages are
missing. A node can detect a loss of communication with a neighbor over a
specific link (in case multiple links run between neighbors, a different
instance of RSVP Hello runs on each link). When such a fault is detected,
the detecting node reacts exactly in the same way as when a fault is
detected through nonreception of an RSVP refresh messages.
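The missed-Hello counting can be sketched as a small monitor. The default threshold of three consecutive misses comes from the text; the class itself is illustrative.

```python
# Sketch of RSVP-TE Hello failure detection: a neighbor is declared lost
# after a configurable number (default 3) of consecutive missed Hellos.

class HelloMonitor:
    def __init__(self, miss_threshold=3):
        self.miss_threshold = miss_threshold
        self.missed = 0

    def tick(self, hello_received):
        """Process one Hello interval; True means the neighbor is declared lost."""
        self.missed = 0 if hello_received else self.missed + 1
        return self.missed >= self.miss_threshold

mon = HelloMonitor()
events = [True, True, False, False, False]   # neighbor goes silent
print([mon.tick(e) for e in events])  # [False, False, False, False, True]
```

A single missed Hello is forgiven as soon as the next one arrives; only a sustained silence trips the detector, which then triggers the same reaction as a missed refresh.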
In the context of LSP protection, there are two types of faults that need to
be detected:
Network connectivity faults. An interruption in the data path.
MPLS fabric defects. The data continue to be forwarded, but on a
wrong path, not the one that was configured for it to be forwarded
on. In other words, the symptom of MPLS fabric defects is the
sending, by any node on an LSP path, of packets of a certain LSP
on a different LSP. This symptom is called misrouting.
RSVP-TE Hello protocol detects network connectivity defects, but does
not detect MPLS fabric defects. There is a need for an MPLS layer OAM
mechanism in order to detect MPLS-fabric defects. Two mechanisms are
available to fulfill that role: ITU-T Y.1711 and IETF MPLS Ping. From a
functionality point of view (that of detecting MPLS table failures), both
mechanisms are similar. MPLS ping offers a supplementary functionality
that supports debugging. After occurrence of a failure, an operator can use
MPLS ping in order to locate the failure.
ITU-T Y.1711
ITU-T Y.1711 provides a mechanism for path continuity test in order to
detect path failures. In this scheme, the Ingress of an LSP periodically
inserts, in the LSP, a specific OAM packetcalled Connectivity
Verification (CV)into the concerned LSP (in-band). The Egress of the
LSP detects a defect on the LSP when it does not receive three consecutive
CV packets for that LSP, at which time it sends a Backward Defect
Indication packet (BDI) to the Ingress of that LSP to notify the Ingress
about the fault. Therefore, the Ingress is the only node that has the ability to
recover from a failurecalled the Path Switch LSR (PSL).
ITU-T Y.1711 includes a mechanism, whereby, on any node on a path of an
LSP, a layer below the MPLS layer that detects a fault notifies the MPLS
layer. The MPLS layer sends a Forward Defect Indication (FDI) towards
the Egresses of the affected LSPs. As the lower layer defect detection and
notification time is relatively instantaneous, this mechanism allows
achieving very low recovery times (tens to hundreds of milliseconds).
Furthermore, this mechanism, along with the CV heartbeat mechanism,
provides a bounded detection time for all defects in the LSP path.
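The egress-side CV accounting can be sketched as follows. Only the three-consecutive-missing-CV rule comes from the text; the function and its interval-by-interval framing are ours.

```python
# Sketch of ITU-T Y.1711 egress behavior: the egress expects one CV
# packet per interval and declares a defect (emitting a BDI toward the
# ingress, the PSL) after three consecutive CV packets go missing.

def egress_defect_after(cv_arrivals):
    """Return the interval index at which the egress would emit a BDI, or None."""
    missing = 0
    for i, arrived in enumerate(cv_arrivals):
        missing = 0 if arrived else missing + 1
        if missing == 3:
            return i  # BDI is sent to the ingress here
    return None

# CV flow stops after the second interval:
print(egress_defect_after([True, True, False, False, False]))  # 4
# Occasional single losses never accumulate to a defect:
print(egress_defect_after([True, False, True, False, True]))   # None
```

With a 1 s CV period, this bounds the detection time at roughly three seconds, after which the BDI (or a lower-layer FDI) reaches the PSL and triggers the switchover.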
Figure 18-4: ITU-T Y.1711 notification mechanism
MPLS Ping
MPLS Ping is used to detect data-plane failures in MPLS LSPs. This
mechanism is modeled after the ping/traceroute philosophy. It operates
under two modes. The first mode is the fault detection at the data plane,
where MPLS Ping is used for connectivity checks, while the second mode
complements the first one by providing a fault isolation mechanism. In this
mode, Traceroute is used for hop-by-hop fault localization.
MPLS Ping may be used in various ways. The following depicts how it can
be used for an LSP path continuity test, which belongs to the first mode.
In this case, MPLS Ping operation is similar to that of ITU-T Y.1711. The
ingress of an LSP periodically inserts an Echo packet into the LSP and
expects to receive a Reply message from the expected (configured) LSP
egress. A failure is detected either when no reply is received or when a
different egress responds with the corresponding error code.
However, unlike ITU-T Y.1711, MPLS Ping relies on many non-LSP
components, and a fault notification is not a reliable indication of an
actual problem on the LSP. Furthermore, with no alarm suppression
mechanisms, a ping failure is not coordinated with local detection
mechanisms. If a link fails, both the local LSRs and the pinging LSR will
detect a problem in an uncoordinated fashion.
Monitoring times and overhead
In any of the cited monitoring mechanisms, the higher the frequency of
monitoring, the faster the response time for fault detection. However, the
higher the frequency of monitoring, the higher the overhead incurred. This
raises the issue of scalability because the polling is per LSP, and the
processing overhead (of polling messages) increases dramatically with
higher polling frequencies. The detection time is generally set to three
times the periodicity of monitoring. The target recovery time dictates the
frequency of monitoring. ITU-T Y.1711 stipulates the frequency of
monitoring to be one (1) second; whereas, it is configurable in RSVP-TE
Hello and in MPLS Ping.
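The trade-off above can be put in numbers. The "three periods" detection convention and Y.1711's 1 s CV period come from the text; the LSP counts are illustrative.

```python
# Back-of-the-envelope sketch of the monitoring trade-off: detection time
# is about three monitoring periods, while message overhead grows with
# the polling rate and the number of LSPs (LSP counts are illustrative).

def detection_time_s(period_s):
    """Detection time, taken as three monitoring periods."""
    return 3 * period_s

def polling_msgs_per_s(period_s, num_lsps):
    """Aggregate message rate: the polling is per LSP."""
    return num_lsps / period_s

# Y.1711's stipulated 1 s CV period, across 10,000 LSPs:
print(detection_time_s(1.0))            # 3.0 s to detect
print(polling_msgs_per_s(1.0, 10_000))  # 10000.0 messages per second

# A 100 ms period detects ten times faster, at ten times the overhead:
print(polling_msgs_per_s(0.1, 10_000))  # 100000.0 messages per second
```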
The overhead generated by the monitoring mechanism is not the only
limiting factor to the monitoring frequency. The interlayer recovery
coordination is another limiting factor. In order to allow a lower layer
recovery mechanism, if any, to take place, the MPLS layer must react after
the expected recovery time of the lower layer has elapsed.
MPLS scope of recovery: global and local
The topology scope of a recovery solution can be Global, also called
end-to-end, or Local. This section describes these two solutions, along
with a description of the RSVP-TE extension for local protection. The
expressions local repair, local recovery, and fast reroute (FRR) are used
interchangeably.
Global recovery
In global recovery (abbreviated Global), also called end-to-end recovery,
the working path LSP is protected by an end-to-end protection path LSP
(see Figure 18-5). The protection LSP may be preestablished (Path
switching) or established on demand (rerouting).
The working path LSP and its corresponding protection path LSP are
disjoint; that is, they have the same endpoints (ingress and egress
LERs; see note 1) and those are the only network elements that they have
in common. In this case, Global protects against link and node failures
on the working path, except for the ingress (and possibly the egress)
LER. However, in certain cases, Global protects only a segment of the
working path LSP. In that case, the protection path LSP starts at a node
downstream from the ingress LER, called the PSL (Path Switch LSR), and
merges back with the working path LSP at a node upstream from the egress
LER, called the PML (Path Merge LSR).
In both cases, upon occurrence of a fault, the PSL or the ingress LER
receives a notification message and is responsible for switching the
traffic from the working path to the protection path. The time the fault
notification message takes to reach the PSL (or the ingress) matters
because it can make the recovery time unacceptable for certain
applications. Local recovery overcomes this problem.
1. The Egress node of the working path LSP may be different from that of the protection path LSP, if the
destination of the traffic is reachable through another Egress. In this case, the protection path LSP
protects against failures on the working path Egress, as well.
Local recovery
In local recovery, a node traversed by an LSP protects against failures on
a subtending link or on a neighboring node traversed by that LSP. In
other words, the node directly upstream of the failed component is
responsible for detecting the failure and switching the traffic onto an
alternate route (using Protection switching or rerouting). That node is
called the Point of Local Repair (PLR). The local protection path LSP
merges back with the main working path LSP at a downstream node, called
the Path Merge LSR (PML) (see Figure 18-5).
Local recovery achieves better recovery times than Global, since the
notification message does not have to travel upstream to the Path Switch
LSR (the PLR is itself the PSL).
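The difference can be made concrete with a rough recovery-time budget. All the numbers here are invented for illustration; only the structure (local recovery removes the notification propagation term) comes from the text.

```python
# Rough sketch contrasting global and local recovery time: global
# recovery pays for propagating the notification back to the PSL or
# ingress, which local recovery avoids because the PLR switches itself.
# All timing figures are illustrative.

def recovery_time_ms(detection_ms, notify_hops, per_hop_ms, switch_ms):
    return detection_ms + notify_hops * per_hop_ms + switch_ms

detection, per_hop, switch = 10, 5, 2

# Global: the BDI/notification crosses 8 hops back to the ingress:
print(recovery_time_ms(detection, 8, per_hop, switch))  # 52 ms

# Local: the PLR is the PSL, so there are no notification hops:
print(recovery_time_ms(detection, 0, per_hop, switch))  # 12 ms
```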
Link and Node Recovery
Depending on the downstream network element protected by a PLR, local
repair can be used to protect against failures on the subtending link or
against failures on a neighboring node. The first is called link recovery,
and the second is named node recovery. Local node protection also protects
against failures on the subtending link connecting to the neighbor node.
The following figure illustrates these two cases: PP_1 is a node recovery,
whereas PP_2 and PP_3 correspond to link recovery.
Figure 18-5: Global (end-to-end) and local protection
Node recovery is called for when the downstream node is deemed
unreliable; otherwise, link recovery is sufficient. Therefore, the choice
between these two alternatives is based on the degree of node reliability
and the target level of network availability.
RSVP-TE extension for local recovery
The IETF draft draft-ietf-mpls-rsvp-lsp-fastreroute-01.txt [FRR_RSVP]
extends RSVP-TE with objects and flags as well as with new mechanisms
in order to handle MPLS local protection (also called Fast-ReRoute: FRR).
The new mechanisms that extend RSVP-TE enable the redirection of traffic
onto local backup LSP tunnels in the event of a failure.
There are two techniques to set up local backup LSPs: 1:1 and Bypass-
Tunnel. In the 1:1 technique, each LSP is associated with its own local
protection LSP. In the Bypass-Tunnel technique, multiple local protection
path LSPs are nested in one LSP: the Bypass-Tunnel LSP. In the latter
technique, only one protection path LSP needs to be maintained between
the PLR and the PML, which saves on resources, including labels and
RSVP refresh overhead.
MPLS recovery versus IP (IGP) recovery
For the sake of clarity, one example of an IGP, in this case OSPF, is
considered. In OSPF, when a router detects that one of its interfaces is
down, it floods a Link State Advertisement (LSA) to all other routers in the
network (in its area). That LSA is received by the router's neighbors
first, which, after initial processing of that LSA, forward it to their
neighbors (except the one from which the LSA was received). This process
continues until all routers receive that LSA. Each router receiving that LSA
waits a certain delay (the SPF Delay), calculates a new shortest path tree, and
then reroutes the traffic according to this new tree.
The four main factors composing the OSPF recovery time are the detection
time, the notification time, the SPF Delay, and tree recalculation time.
Recovery Time = detection + notification + SPF Delay + tree re-
calculation
The SPF Delay plus the tree recalculation time is called the convergence
time. The SPF Delay is a vendor-defined parameter used to avoid overly
frequent shortest path calculations; its value is usually set to five seconds.
As the SPF Delay decreases, shortest path calculations run more often, which
increases the risk of instability and, with it, of higher recovery times. The
tree recalculation time varies with the size of the network and the degree of
connectivity (number of interfaces) in that network.
The MPLS recovery time is composed of the following times:
the detection time
the notification time
the switching time
Recovery Time = detection + notification + switching
As MPLS and OSPF use a Hello protocol for failure detection, the
detection time in MPLS and OSPF should be very close.
Due to the flooding mechanism in OSPF, the notification time in OSPF is
shorter than in MPLS (but by no more than a couple of hundred
milliseconds). However, this difference disappears if MPLS local
protection is used (as opposed to MPLS global protection).
The MPLS switching time is small compared to the convergence time
introduced by OSPF.
This shows that MPLS achieves better recovery times than OSPF, and that
the difference in recovery times between the two protocols increases
when MPLS local protection is used (instead of MPLS global protection).
MPLS local recovery can be achieved within intervals on the order of
tenths of seconds, compared to intervals of seconds for OSPF.
Overall, recovery times range from hundreds of milliseconds to minutes in
OSPF, and from hundreds of milliseconds to seconds in MPLS.
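The two recovery-time breakdowns above can be made concrete with a small sketch. Every timing value below is an illustrative assumption (in milliseconds), not a measurement; the point is only that the OSPF total is dominated by the SPF Delay term, while the MPLS total has no comparable component.

```python
# Illustrative comparison of the OSPF and MPLS recovery-time formulas.
# All numeric inputs are assumed figures for the sake of the example.

def ospf_recovery_ms(detection, notification, spf_delay, tree_recalc):
    # Recovery Time = detection + notification + SPF Delay + tree recalculation
    return detection + notification + spf_delay + tree_recalc

def mpls_recovery_ms(detection, notification, switching):
    # Recovery Time = detection + notification + switching
    return detection + notification + switching

# Hello-based detection is assumed comparable in both protocols; the SPF
# Delay uses the common 5-second default; MPLS switching is assumed small.
ospf = ospf_recovery_ms(detection=300, notification=100, spf_delay=5000, tree_recalc=400)
mpls = mpls_recovery_ms(detection=300, notification=200, switching=50)
print(f"OSPF: {ospf} ms, MPLS: {mpls} ms")  # prints "OSPF: 5800 ms, MPLS: 550 ms"
```

With these assumed inputs, the OSPF total comes out in seconds and the MPLS total in hundreds of milliseconds, consistent with the orders of magnitude quoted in the text.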
What you should have learned
The MPLS protection schemes are 1+1, 1:1, 1:N, and M:N, listed in
decreasing order of the network availability they can achieve. In all of
these strategies, the protection path must be disjoint from the
corresponding working path.
The functions needed in an MPLS recovery solution are monitoring,
detection, notification, switching, routing, and switching back.
The RSVP-TE Path message is used to signal a new LSP. The Path message
includes the explicit route that must be followed by that message. RSVP
states associated with the LSP are created on each node traversed by the
LSP and, in order to protect against certain types of failures, these states are
periodically refreshed during the lifetime of that LSP.
RSVP-TE augmented RSVP's monitoring, detection, and notification
mechanisms with a keep-alive protocol, namely the RSVP-TE Hello
protocol.
RSVP-TE monitoring and detection mechanisms do not detect certain
types of faults, such as MPLS forwarding table corruption, and are
therefore complemented by ITU-T Y.1711, an ITU-T MPLS monitoring,
detection, and notification mechanism.
The topology scope of a recovery solution can be Global, also called end-
to-end, or Local. The latter achieves faster recovery times than the former,
but optimizing network resource utilization is more complex with Local
than with Global.
References
RFC 2205, R. Braden et al., "Resource ReSerVation Protocol (RSVP)," IETF,
September 1997, http://www.ietf.org/rfc/rfc2205.txt

RFC 3209, D. Awduche et al., "RSVP-TE: Extensions to RSVP for LSP
Tunnels," IETF, December 2001, http://ietf.org/rfc/rfc3209.txt

FRR_RSVP, A. Atlas et al., "Fast Reroute Extensions to RSVP-TE for LSP
Tunnels," IETF, draft-ietf-mpls-rsvp-lsp-fastreroute-01.txt

ITU-T Recommendation Y.1711, "OAM Mechanism for MPLS Networks,"
Study Group 13, ITU-T, February 2002

Informative references

MPLS-ARCH, E. Rosen et al., "Multiprotocol Label Switching Architecture,"
RFC 3031, January 2001, http://ietf.org/rfc/rfc3031.txt

MPLS-RECOV, V. Sharma et al., "Framework for MPLS-based Recovery,"
draft-ietf-mpls-recovery-frmwrk-05.txt, http://search.ietf.org/internet-drafts/
draft-ietf-mpls-recovery-frmwrk-05.txt

NET-SURVIV, K. Owens et al., "Network Survivability Considerations for
Traffic Engineered IP Networks," draft-owens-te-network-survivability-03.txt,
http://search.ietf.org/internet-drafts/draft-owens-te-network-survivability-03.txt

MPLS-TE, D. Awduche et al., "Requirements for Traffic Engineering Over
MPLS," RFC 2702, September 1999, http://www.ietf.org/rfc/rfc2702.txt

INTERNET-TE, D. Awduche et al., "Overview and Principles of Internet
Traffic Engineering," RFC 3272, May 2002, http://www.ietf.org/rfc/rfc3272.txt
Chapter 19
Implementing QoS: Achieving
Consistent Application Performance
Ralph Santitoro
Figure 19-1: Transport path diagram
Implementing QoS involves mapping QoS mechanisms between the Layer
3 and Layer 2 protocols. This is done hop-by-hop. Additionally, end-to-end
QoS policies must be implemented and administered in the form of
network service classes. These tie together both the nodal (hop) QoS
mechanisms and performance, and network (transmission) QoS
performance required to achieve good end-to-end QoS for applications and
services. The transport path diagram highlights topics covered in this
chapter.
Concepts covered
Mapping DiffServ to Layer 2 QoS mechanisms
Mapping DSCP to and from 802.1p user priorities
Mapping DiffServ to frame relay
Mapping DiffServ to ATM
Mapping DiffServ to PPP classes
Mapping DiffServ to MPLS E-LSPs and L-LSPs
Application performance requirements
Categorizing applications based on end user expectations and
performance objectives
Making QoS simple with network service classes
Introduction
Chapter 10 describes the many mechanisms that are used to provide good
QoS performance for real-time applications, approaching QoS from the
bottom up. This chapter takes a top-down approach to implementing
end-to-end QoS policies. First, the mapping between DiffServ (IP QoS)
and various Layer 2 QoS mechanisms is discussed. This is followed by a
discussion of the performance requirements and categorization of
applications supported over a converged network. Finally, an approach to
simplify QoS is discussed, based on network service classes that provide
common QoS policies for popular real-time and nonreal-time applications
with similar QoS performance requirements.
Mapping DiffServ to Link Layer (Layer 2) QoS
QoS is provided at Layer 3 (IP) using the Differentiated Services
(DiffServ) architecture as described in Chapter 10. IP packets traverse
different link layers (for example, Ethernet, PPP, ATM and frame relay)
from source to destination host. A Layer 2 networking device may not be
able to read or interpret the DiffServ Codepoint (DSCP); such a device is
subsequently referred to as a non-DiffServ-capable device. Therefore,
end-to-end QoS requires mapping from the DSCP and DiffServ traffic
management mechanisms, for a given DiffServ PHB, to the appropriate link
layer QoS markings and traffic management mechanisms. By providing a
consistent QoS mapping policy between DiffServ and the Layer 2 QoS
mechanisms, consistent QoS performance can be achieved for packets
traversing different Layer 2 networks.
QoS can only be achieved by ensuring that, at each hop in the network, each
Network Element (NE) applies consistent treatment to traffic flows over
the different link layers (see Figure 19-2). The most commonly used link
layer technologies are Ethernet, frame relay, ATM and PPP.
Figure 19-2: QoS, hop-by-hop across different link layers
Mapping DiffServ to Ethernet
The IEEE 802.1Q standard provides the 802.1p user priority and VLAN ID
fields that can be used for QoS (described in Chapter 10). VLANs allow for
the logical grouping of users or devices with similar QoS or security
requirements. Most commonly, VLANs provide traffic separation and QoS
based on the particular Ethernet switch port to which a user is connected
(called port-based VLANs). VLANs can also be created based on Ethernet
source or destination MAC addresses, protocol types, or other user-defined
information that the Ethernet switches can classify in the packets.
Typically, the class of service to provide is identified from the Ethernet
frame's 802.1p user priority field. Therefore, the DSCP should be mapped
to the 802.1p user priority consistently across the network. This mapping
is important because some Ethernet switches may not be able to read the
DSCP in the Ethernet frame's payload, yet can interpret the 802.1p value
in the Ethernet frame's header.
Mapping DSCP to 802.1p
A DSCP may already be marked in a packet by a trusted source; that is, the
packet has already been classified and marked by a trusted NE somewhere
else in the network. The DSCP may be remarked based upon the QoS
policy setting or if the traffic is using more than the prescribed bandwidth
for its CoS. A packet may also enter an Ethernet interface on a DiffServ-
capable NE marked with both a zero DSCP value and zero 802.1p user
priority value. Based on the network's QoS policy, the DiffServ-capable NE
should mark the packet's DSCP and 802.1p user priority prior to the packet
egressing an Ethernet interface.
Mapping the DSCP to 802.1p user priorities enables next-hop,
non-DiffServ-capable NEs to use the 802.1p value to determine the proper
QoS mechanisms to apply. Such a Layer 2 NE relies on the DiffServ-capable
NE to map the appropriate DSCP to the 802.1p user priority. Refer to
Figure 19-3.
Figure 19-3: DSCP to 802.1p mapping scenario
A default QoS mapping policy between DSCP and 802.1p is useful for
several reasons.
The DSCP remains with the IP packet across all Layer 2 networks.
Therefore, it should be used to derive the 802.1p value.
Most Ethernet switches support 802.1Q, yet they may be
non-DiffServ-capable NEs that cannot interpret the DSCP value.
In that case, the 802.1p value is the only QoS marking that can be
used to determine the proper QoS mechanisms to apply.
Table 19-1 provides a default mapping of DSCP to 802.1p. Note that
802.1p value 1 is not used in this mapping. Also, packets marked with
either the CS7 or CS6 DSCP are typically network control traffic; in
the DSCP to 802.1p mapping, both are marked with 802.1p value 7 to
maximize the number of 802.1p values available for other applications.
Most Ethernet switches do not support drop precedence encoding using
802.1p. Therefore, the drop precedence encoded in the DSCP for a given
AF PHB class (for example, AF11, AF12, and AF13) cannot be differentiated
using 802.1p. In this case, all DSCP values for the given AF PHB class are
mapped to the same 802.1p value, as illustrated in Table 19-1.
Table 19-1: Mapping DSCP to 802.1p

  DSCP                           Maps to 802.1p value
  CS7, CS6                       7
  EF, CS5                        6
  AF41, AF42, AF43, CS4          5
  AF31, AF32, AF33, CS3          4
  AF21, AF22, AF23, CS2          3
  AF11, AF12, AF13, CS1          2
  DF, CS0, all undefined DSCPs   0
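Table 19-1 can be expressed as a simple lookup. The sketch below assumes DSCP names are handled as strings (a detail of this example, not of the mapping itself); any undefined DSCP falls back to 802.1p value 0, per the last table row.

```python
# Default DSCP -> 802.1p user priority mapping, per Table 19-1.
DSCP_TO_8021P = {
    "CS7": 7, "CS6": 7,
    "EF": 6, "CS5": 6,
    "AF41": 5, "AF42": 5, "AF43": 5, "CS4": 5,
    "AF31": 4, "AF32": 4, "AF33": 4, "CS3": 4,
    "AF21": 3, "AF22": 3, "AF23": 3, "CS2": 3,
    "AF11": 2, "AF12": 2, "AF13": 2, "CS1": 2,
    "DF": 0, "CS0": 0,
}

def dscp_to_user_priority(dscp: str) -> int:
    # Undefined DSCPs map to best effort (802.1p 0).
    return DSCP_TO_8021P.get(dscp, 0)
```

Note the deliberate information loss: AF11, AF12, and AF13 all yield 2, because 802.1p cannot carry drop precedence.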
Mapping 802.1p to DSCP
A packet may arrive from a downstream non-DiffServ-capable NE with its
DSCP set to zero (the Default Forwarding DSCP; that is, best-effort class)
but with an 802.1p user priority set. Once the packet ingresses a
DiffServ-capable NE, the packet may be classified and marked with, or
mapped to, a DSCP based upon the network's QoS policy. Refer to Figure 19-4.
Figure 19-4: 802.1p to DSCP mapping scenario
A default QoS mapping policy between 802.1p and DSCP is useful for
several reasons.
The 802.1p value has only local significance over the Ethernet
segment. Once the Ethernet frame header is removed by a router,
the 802.1p value is lost. However, a DSCP value remains with the
IP packet.
Other DiffServ-capable devices may support a DSCP marking but
may not support 802.1Q, or may not have their Ethernet interfaces
configured to support it (and hence have no 802.1p support).
Table 19-2 provides the recommended default mapping between 802.1p
and DSCP if the incoming packet has no DSCP marked or if the QoS policy
is set to trust the 802.1p value and ignore the DSCP value. Note that the
802.1p values map to the AF PHB group DSCPs with the lowest drop
precedence; for example, 802.1p 5 maps to AF41 (instead of AF42 or
AF43). This is done because 802.1p does not support the concept of drop
precedence. Finally, a custom DSCP may be used to map to 802.1p 1, since
that value is not used for any of the standard DSCP values. If no custom
DSCP is defined, 802.1p 1 should be mapped to the DF DSCP.
Table 19-2: Mapping 802.1p to DSCP

  802.1p User Priority   Maps to DSCP
  7                      CS6
  6                      EF
  5                      AF41
  4                      AF31
  3                      AF21
  2                      AF11
  1                      DF or custom
  0                      DF

Scenarios for Mapping DSCP to/from 802.1p
There are different scenarios for when DSCP to 802.1p or 802.1p to DSCP
mapping is required. In Figure 19-5, NE-1 and NE-2 are
non-DiffServ-capable NEs that can only mark 802.1p and are the source and
sink of traffic. NE-3 and NE-4 are DiffServ-capable devices. NE-3 can mark
and interpret both DSCP and 802.1p values. NE-4 connects to other
DiffServ-capable NEs in the network. When NE-3 receives a packet from
either NE-1 or NE-2, only the 802.1p value is marked. Therefore, NE-3
needs a default map between the 802.1p value and a DSCP value, as
illustrated in Table 19-2. Note that NE-3 could also ignore the 802.1p value
and classify the packet to determine the DSCP value, if this approach is
specified in the network QoS policy. In the reverse direction, when NE-3 is
sending packets to NE-1 or NE-2, it must map the DSCP to an 802.1p value,
since NE-1 and NE-2 are non-DiffServ-capable devices and can only
interpret the 802.1p value to apply the appropriate QoS mechanisms.
Finally, packets sent between NE-3 and NE-4 only need to use the DSCP
value to apply the appropriate DiffServ QoS mechanisms because both
devices are DiffServ-capable. The DSCP value is preserved if both
interfaces belong to the same 'trusted' administrative domain. Otherwise,
the DSCP may be remarked based on the respective network's QoS policy.
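The reverse direction of Table 19-2 is a sketch of the same kind, applied when no DSCP is marked or when policy trusts the 802.1p value. In this example, no custom DSCP is defined, so priority 1 maps to DF; DSCP names as strings are an assumption of the sketch.

```python
# Default 802.1p -> DSCP mapping, per Table 19-2. AF classes map to the
# lowest drop precedence member (802.1p cannot express drop precedence).
USER_PRIORITY_TO_DSCP = {
    7: "CS6",
    6: "EF",
    5: "AF41",  # not AF42/AF43: lowest drop precedence
    4: "AF31",
    3: "AF21",
    2: "AF11",
    1: "DF",    # "DF or custom"; a custom DSCP is assumed undefined here
    0: "DF",
}

def user_priority_to_dscp(priority: int) -> str:
    return USER_PRIORITY_TO_DSCP[priority]
```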
Figure 19-5: Scenarios for mapping DSCP to/from 802.1p
Mapping DiffServ to Frame Relay
Frame relay supports traffic management parameters that specify the
Committed Information Rate (CIR), committed burst size (Bc), and excess
burst size (Be) for a particular Permanent Virtual Circuit (PVC). However,
some frame relay service offerings specify a zero CIR, in which case the
aforementioned traffic management parameters are meaningless.
One frame relay traffic management parameter in the frame relay header
that affects packet loss is the Discard Eligible (DE) bit. The DE parameter
is useful regardless of whether the service specifies a zero or nonzero
CIR. Frame relay interfaces use DE to determine whether incoming frame
relay frames may be discarded under congestion. When DE=0, the
incoming frame should not be discarded. When DE=1, the incoming frame
may be discarded if the network element is under congestion.
Mapping DSCPs to the appropriate frame relay DE value is important to
maintain consistency between IP and frame relay traffic management.
Table 19-3 provides the mapping between the standard DSCP values and
the frame relay DE value.
Table 19-3: Mapping DSCP to frame relay DE value

  DiffServ Codepoint (DSCP)   Frame Relay DE
  CS7, CS6                    0
  EF, CS5                     0
  AF41, CS4                   0
  AF42, AF43                  1
  AF31, CS3                   0
  AF32, AF33                  1
  AF21, CS2                   0
  AF22, AF23                  1
  AF11, CS1                   0
  AF12, AF13                  1
  DF, CS0                     1

Mapping DiffServ to ATM
ATM supports service categories, each with different traffic management
parameters and delay, delay variation, and loss performance objectives. The
most widely available ATM service categories are Constant Bit Rate
(CBR), real-time Variable Bit Rate (rt-VBR), nonreal-time VBR
(nrt-VBR), and Unspecified Bit Rate (UBR). In general, CBR is used for
circuit emulation services (including circuit-based voice or circuit-based
video transport), rt-VBR is used for real-time packet-based voice or
packet-based video applications, nrt-VBR is used for priority data
applications, and UBR is used for best-effort data applications.
An important ATM traffic management parameter in the ATM header that
affects packet loss is the Cell Loss Priority (CLP) bit. Similarly to frame
relay's DE parameter, ATM interfaces use CLP to determine whether
incoming ATM cells may be discarded under congestion. When CLP=0, the
incoming cell should not be discarded. When CLP=1, the incoming cell
may be discarded if the network element is under congestion.
Mapping packets marked with DSCPs to the appropriate ATM service
category, and DSCP values to the appropriate CLP value, is important to
maintain consistency between IP and ATM traffic management and service
performance. Table 19-4 provides the mapping between the standard DSCP
values and the ATM service categories and CLP values.
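Table 19-3 amounts to a drop-eligibility test: the higher drop precedence AF codepoints and best-effort traffic get DE=1, everything else DE=0. A sketch (DSCP names as strings are an assumption of the example):

```python
# DSCPs marked drop-eligible (DE=1) per Table 19-3: higher drop
# precedence AF codepoints plus best-effort traffic.
DE_ELIGIBLE = {
    "AF42", "AF43", "AF32", "AF33",
    "AF22", "AF23", "AF12", "AF13",
    "DF", "CS0",
}

def dscp_to_de(dscp: str) -> int:
    """Return the frame relay DE bit for a DSCP, per Table 19-3."""
    return 1 if dscp in DE_ELIGIBLE else 0
```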
Table 19-4: Mapping DSCP-marked packets into ATM Service Categories
and CLP

  DiffServ Codepoint (DSCP)   ATM Service Category   ATM CLP
  CS7                         rt-VBR (or CBR [1])    0
  CS6                         rt-VBR                 0
  EF                          rt-VBR (or CBR [2])    0
  CS5                         rt-VBR                 0
  AF41, CS4                   rt-VBR                 0
  AF42                        rt-VBR                 1
  AF43                        rt-VBR                 1
  AF31, CS3                   rt-VBR                 0
  AF32                        rt-VBR                 1
  AF33                        rt-VBR                 1
  AF21, CS2                   nrt-VBR                0
  AF22                        nrt-VBR                1
  AF23                        nrt-VBR                1
  AF11, CS1                   nrt-VBR                0
  AF12                        nrt-VBR                1
  AF13                        nrt-VBR                1
  DF, CS0                     UBR                    1

[1] In general, packets marked with the CS7 DSCP should be mapped to
rt-VBR. However, critical protocols that provide constant-rate heartbeats
require the lowest loss and delay for optimal network operation; such
protocol packets should be mapped to CBR.
[2] In general, packets marked with the EF DSCP should be mapped to
rt-VBR. However, circuit emulation over IP application packets marked
with the EF DSCP should be mapped to CBR.

Mapping DiffServ to PPP Class Numbers
The Point-to-Point Protocol (PPP) with multiclass extensions is used for
CoS markings over PPP connections. PPP fragmentation and interleaving is
required when a network supports both real-time and nonreal-time
applications over a converged network with low-bandwidth (< 1 Mbps)
connections. In this use case, the nonreal-time IP data packets are
fragmented and interleaved to reduce the queuing delay of the real-time
packets, as described in Chapter 10. Each packet fragment is assigned a PPP
Class number that identifies to which CoS the packet fragment belongs.
This information is used during the packet reassembly process where the
PPP connection is terminated.
There are two ways to map DiffServ to PPP Class numbers: one using the
short sequence number format and the other using the long sequence
number format.
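Table 19-4 can be sketched as a classifier returning the service category and CLP bit for a DSCP. The footnoted CS7/EF-to-CBR cases are omitted here for brevity, and DSCP names as strings are an assumption of the example:

```python
# Sketch of Table 19-4: DSCP -> (ATM service category, CLP bit).
# rt-VBR carries real-time classes, nrt-VBR carries priority data,
# UBR carries best effort; CLP=1 marks cells discardable under congestion.
def dscp_to_atm(dscp: str):
    if dscp in ("CS7", "CS6", "EF", "CS5", "AF41", "CS4", "AF31", "CS3"):
        return ("rt-VBR", 0)   # CS7/EF may instead use CBR (see footnotes)
    if dscp in ("AF42", "AF43", "AF32", "AF33"):
        return ("rt-VBR", 1)
    if dscp in ("AF21", "CS2", "AF11", "CS1"):
        return ("nrt-VBR", 0)
    if dscp in ("AF22", "AF23", "AF12", "AF13"):
        return ("nrt-VBR", 1)
    return ("UBR", 1)          # DF, CS0, and anything unclassified
```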
The short sequence number format (four classes) is used for connections
operating at less than 1 Mbps. Popular low-speed WAN connections in use
today are 56 kbps, 64 kbps, 128 kbps, 256 kbps and 384 kbps. Additionally,
popular DSL services use PPP over Ethernet (PPPoE) over DSL
connections, which have symmetrical or asymmetrical connection speeds
similar to the aforementioned low-speed WAN connections.
The short sequence number format only allows a subset of the DiffServ
PHBs to be supported. Table 19-5 provides an example mapping; other
mapping arrangements are possible. Typically, EF-marked packets are
marked with the PPP class number corresponding to the PPP class that
provides the lowest forwarding delay.
Table 19-5: DSCP to PPP class number short sequence format example

  DiffServ Codepoint (DSCP)   PPP Class Number (short sequence number format)
  EF                          3
  CS7, CS6, CS5               2
  AF41, AF42, AF43            1
  DF, CS0                     0

The long sequence number format (sixteen classes) is not practical to use
with low bandwidth connections because there is insufficient usable
bandwidth per class (application); for example, 128 kbps / 16 classes =
8 kbps per class (application). Additionally, packet fragmentation and
interleaving is not required over high bandwidth (> 1 Mbps) connections.

Mapping DiffServ to MPLS
MPLS uses the DiffServ architecture to provide QoS. MPLS introduces
some new DiffServ terminology that adds more precision to the DiffServ
PHB definition. One important new term is the PHB Scheduling Class
(PSC). A DiffServ PSC is a set of one or more DiffServ PHBs that must
follow the same packet ordering constraints. For example, the AF1 PSC
consists of the AF11, AF12 and AF13 PHBs, which means that all
AF1x-marked packets (where x = 1, 2 or 3) must be sent in the order in
which they were received. The EF PSC consists of only the EF PHB, and
the DF PSC consists of only the DF PHB. Finally, the CS PSC consists of
the 8 PHBs identified by the CS0 through CS7 DSCPs.
MPLS allows for two different types of LSPs, each defining a different
interpretation of the EXP bits.
The most common type of LSP is the EXP-Inferred-PSC LSP (E-LSP),
where the EXP bits are used to indicate up to eight PSCs, that is, PHB plus
drop precedence, for a given E-LSP in an administrative domain. The
mapping of the EXP bits to PSC can be either explicitly signaled during
label setup or statically configured. Since there are many ways to configure
the EXP bits to support PSCs, a flexible mapping approach is required.
Table 19-6 provides an example DSCP to EXP mapping that maximizes the
number of DiffServ PSCs that can be supported.
Table 19-6: DSCP to EXP mapping for E-LSPs supporting the most PSCs

  DiffServ PSC   DiffServ Codepoint (DSCP)   EXP value (for E-LSP)
  CS             CS7                         7
  CS             CS6                         6
  EF             EF, CS5                     5
  AF4            AF41, AF42, AF43, CS4       4
  AF3            AF31, AF32, AF33, CS3       3
  AF2            AF21, AF22, AF23, CS2       2
  AF1            AF11, AF12, AF13, CS1       1
  DF             DF, CS0                     0

The limitation of the mapping in Table 19-6 is that it does not account for
the drop precedence indication encoded in the DSCPs of each AF PHB; for
example, packets marked with AF12 and AF13 DSCPs should be discarded
(have a higher drop precedence) before packets marked with the AF11
DSCP. An alternative mapping approach that accounts for drop precedence
in the AF PHB group is shown in Table 19-7. Note that the mapping in
Table 19-7 can accommodate fewer DiffServ PSCs because multiple
EXP values are used for the same PSC to indicate drop precedence; that is,
whether the packet can be discarded under congestion.

Table 19-7: Example DSCP to EXP mapping for E-LSPs supporting drop
precedence

  DiffServ PSC   DiffServ Codepoint (DSCP)   EXP value (for E-LSP)   Discard under congestion?
  CS             CS7, CS6                    7                       No
  EF             EF, CS5                     6                       No
  AF4            AF41, CS4                   5                       No
  AF4            AF42, AF43                  4                       Yes
  AF3            AF31, CS3                   3                       No
  AF3            AF32, AF33                  2                       Yes
  CS             CS2                         1                       No
  DF             DF, CS0                     0                       Yes
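The Table 19-7 variant can be sketched as a function returning the EXP value and the discard eligibility for a DSCP. Two EXP values are spent on each of AF4 and AF3 to encode drop precedence, which is exactly why fewer PSCs fit than in Table 19-6. DSCP names as strings are an assumption of the example:

```python
# Sketch of Table 19-7: DSCP -> (EXP value, discard-eligible under congestion).
def dscp_to_exp(dscp: str):
    if dscp in ("CS7", "CS6"):
        return (7, False)
    if dscp in ("EF", "CS5"):
        return (6, False)
    if dscp in ("AF41", "CS4"):
        return (5, False)
    if dscp in ("AF42", "AF43"):
        return (4, True)   # AF4 spends a second EXP value on drop precedence
    if dscp in ("AF31", "CS3"):
        return (3, False)
    if dscp in ("AF32", "AF33"):
        return (2, True)   # likewise for AF3
    if dscp == "CS2":
        return (1, False)
    return (0, True)       # DF, CS0
```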
The other type of LSP is the Label-Only-Inferred-PSC LSP where the
MPLS label value determines the PSC and the EXP field in the MPLS shim
header determines the drop precedence for the L-LSP in an administrative
domain. L-LSPs allow for more flexibility because the number of PSCs is
limited only by the number of labels. Refer to Table 19-8 for an example
mapping of DSCP to MPLS label and EXP for L-LSPs using labels 100-
110.
Table 19-8: Example DSCP to EXP mapping for L-LSPs using labels 100-110

  DiffServ PSC   DiffServ Codepoint (DSCP)   MPLS L-LSP Label   EXP value (for L-LSP)   Discard under congestion?
  CS             CS7, CS6                    100                0                       No
  EF             EF, CS5                     101                0                       No
  AF4            AF41, CS4                   102                0                       No
  AF4            AF42, AF43                  103                1                       Yes
  AF3            AF31, CS3                   104                0                       No
  AF3            AF32, AF33                  105                1                       Yes
  AF2            AF21, CS2                   106                0                       No
  AF2            AF22, AF23                  107                1                       Yes
  AF1            AF11, CS1                   108                0                       No
  AF1            AF12, AF13                  109                1                       Yes
  DF             DF, CS0                     110                1                       Yes

E-LSPs support many DiffServ PSCs per E-LSP, whereas L-LSPs support
only one DiffServ PSC per L-LSP. Fewer LSPs simplify network
operations, administration, maintenance, and provisioning (OAM&P),
resulting in lower cost of operations. This is why E-LSPs are
predominantly used when implementing QoS in MPLS networks.

Application Performance Requirements
Over a converged IP network, the QoS performance requirements for both
real-time and nonreal-time applications must be considered. It cannot be
assumed that only real-time applications need good QoS performance;
nonreal-time mission-critical applications also require good QoS
performance from the converged network.
Table 19-9 illustrates the QoS performance dimensions required by some
popular applications, each with different QoS performance requirements.
Without applying proper QoS technologies over a common, converged IP
network, these applications will receive unpredictable performance,
ultimately resulting in unacceptable user satisfaction or business
productivity.
Table 19-9: Application Performance Dimensions

  Example Applications             Bandwidth Needs   Sensitivity to:
                                                     Delay   Jitter   Loss
  IP Telephony (VoIP)              Low               High    High     Med-High
  Interactive Video Conferencing   Med-High          High    High     High-Med
  Streaming Video on Demand        Med-High          Med     Med      Med
  Streaming Audio (Webcasts)       Low               Med     Low      Med
  Client / Server Transactions     Med               Med     Low      High
  Email                            Low               Low     Low      High
  File transfer                    Med-High          Low     Low      High

Categorizing Applications
Networked applications can be categorized based on end-user expectations
or application performance requirements. Some applications are between
people, while other applications are between a person and a networked host;
for example, a PC (user) and a web server. Finally, some applications are
between networking devices (for example, router to router).
Applications can be divided into four traffic categories: Network Control,
Interactive, Responsive, and Timely. Refer to Table 19-10, which includes
some representative applications in the different categories.

Table 19-10: Categorization of Applications into Traffic Categories

  Traffic Category   Example Application
  Network Control    Critical Alarms, Routing, Billing, Critical OAM
  Interactive        VoIP, Interactive Gaming, Video Conferencing
  Responsive         Streaming audio/video, Client/Server Transactions
  Timely             Email, Non-critical OAM

Interactive Applications
Some applications are interactive, whereby two or more people actively
participate. The participants expect the networked application to respond in
real time. In this context, real time means that there is minimal delay
(latency) and delay variation (jitter) between the sender and receiver. Some
interactive applications, such as a telephone call, have operated in real time
over the telephone companies' circuit switched networks for decades. The
QoS expectations for real-time voice applications have been set and,
therefore, must also be achieved as voice applications are migrating from
being circuit-based to being packet-based (for example, VoIP).
Other interactive applications include video conferencing and interactive
gaming. Since the interactive applications operate in real time, packet loss
must also be minimized. Imagine a telephone call where whole or partial
words regularly get lost during the conversation. This level of QoS
performance would not only be unsatisfactory but would make the
application (telephone call) not very usable.
Interactive applications typically use the User Datagram Protocol (UDP)
and, hence, cannot retransmit lost or dropped packets as Transmission
Control Protocol (TCP)-based applications can. However, lost packet
retransmission would not be beneficial anyway, because interactive
applications are time-based. For example, if a voice packet is lost, it makes
no sense for the sender to retransmit it: the conversation has progressed,
and the lost packet would belong to a part of the conversation that has
already passed in time.
Responsive Applications
Some applications are between a person and a networked host or
application. End users require these applications to be responsive, so a
request sent to the networked host requires a relatively quick response back
to the sender. These applications are sometimes referred to as being near
real-time and require relatively low packet delay, jitter and loss. However,
QoS performance requirements for responsive applications are not as
stringent as for the interactive (real-time) applications. This category
includes streaming media and client/server transaction-oriented
applications.
Streaming media applications (for example, movies on demand or
webcasts) require the network to be responsive when they are initiated so
the user doesn't wait too long before the media begins playing. These
applications also require the network to be responsive for certain types of
signaling. For example, with movies on demand, when one changes
channels or forwards, rewinds or pauses the media, one expects the
application to respond about as quickly as the controls of a standalone
video player.
Web-based applications involve a user selecting a hyperlink to jump to a
new page or submit information; for example, place an order or submit a
request. These applications also require the network to be responsive, such
that once the hyperlink is selected, a response (for example, a new page
begins loading) occurs typically within one to two seconds. With
broadband Internet access connections, this type of performance is often
achieved over a best-effort network, albeit somewhat inconsistently. Other
Chapter 19 Implementing QoS: Achieving Consistent Application Performance 447
Copyright 2004 Nortel Networks Essentials of Real-Time Networking
responsive applications include a financial transaction; for example, place
credit card order and quickly provide feedback to the user indicating the
transaction has completed. Otherwise, the user may be unsure that the
transaction completed and attempt to initiate another (duplicate) order
unknowingly. Alternatively, the user may assume that the order was placed
correctly, but it may not have been placed at all. In either case, the user
would be dissatisfied with the network or application's performance.
Responsive applications can use either UDP or TCP-based transport.
Streaming media applications typically use UDP (but can also use TCP).
Web-based applications are based on the Hypertext Transfer Protocol
(HTTP) and always use TCP. For Web-based applications, packet loss is
managed by TCP, which retransmits lost packets. Retransmission of lost
streaming media packets is typically handled by application-level protocols
as long as the media is sufficiently buffered. If not, then the lost packets are
discarded resulting in some distortions in the audio or video media.
Timely Applications
Some applications between a person and networked host or application
require 'timely' and reliable delivery of the information in 'minutes' instead
of 'seconds'. Such applications include e-mail and file transfer. The relative
importance of these applications is based on their business priorities.
These applications require that the packets arrive within a bounded delay.
For example, if an e-mail takes a few minutes to arrive at its destination,
this is acceptable. However, in a business environment, if an e-mail took
ten minutes to arrive at its destination, this may be unacceptable. The same
bounded delay applies to file transfers. Once a file transfer is initiated,
delay and jitter are less critical because file transfers often take many
minutes to complete. Note that these timely applications use TCP-based
transport and, therefore, packet loss is managed by TCP, which retransmits
any lost packets resulting in no packet loss.
Timely applications expect the network to provide packets with a bounded
amount of delay. Jitter has a negligible effect on these types of applications
and packet loss is reduced to zero due to TCP's loss recovery mechanisms.
Network Control Applications
Some applications are used to control the Operation, Administration and
Maintenance (OAM) of the network. Such applications include network
routing protocols (for example, RIP and OSPF), network alarms and logs
retrieved using SNMP or Telnet sessions, and network device
configurations configured using SNMP, HTTP, COPS or Telnet sessions.
These applications can be subdivided into those required for critical and
those for standard network operating conditions. To create highly available
networks, network control applications are typically given some amount of
network resources (bandwidth) to ensure delivery and minimize packet loss
and delay. This is done because the network must be operating properly in
order for it to provide proper QoS performance for the end-user
applications.
Network control applications require a relatively low amount of delay.
Jitter has a negligible effect on these types of applications and packet loss
must be minimized since some of these applications are not transported via
TCP and, hence, do not have packet loss recovery mechanisms.
Making QoS Simple via Network Service Classes
Configuring routers and networking equipment requires a great deal of
expertise gained through training and implementation experience. Very
frequently, the networking terminology is neither intuitive nor obvious and
requires further research into product documentation or Internet standards
before even basic configuration can be accomplished. The IP QoS achieved
by the DiffServ architecture provides a new set of terminology and a
toolbox of IP QoS technologies that, unfortunately, are also not very
intuitive. Implementing good end-to-end QoS can be complex because of
the many technologies, standards and implementation choices to consider.
Furthermore, there is no single QoS technology or standard that can be
used across a network given the various Layer 2 technologies (for example,
Ethernet, frame relay, ATM, and PPP) used to transport the IP traffic.
This section describes how applications can be categorized based on their
QoS performance requirements and how network QoS policies can be put
in place to meet these requirements in order for the applications to perform
well and meet end-user expectations. This application categorization
schema, developed by Nortel, provides a service class nomenclature per
application type and default QoS mechanisms per service class to simplify
the deployment of converged networks. Since the service classes provide
network-wide QoS policies, they are referred to as network service classes
(NSC).
These default QoS performance settings are specified per NSC. The
network administrator can then match the QoS performance provided by
the NSC to the applications' requirements. For example, it may not be
obvious that the DiffServ EF PHB should be applied to VoIP traffic and the
AF PHB should be applied to TCP-based traffic, such as Web traffic.
Furthermore, end-to-end QoS requires more than just IP QoS, and DiffServ
only defines a subset of the end-to-end QoS treatment that traffic will
receive or need to receive. For example, as traffic traverses over different
link layers such as Ethernet, frame relay or ATM, it will encounter devices
that can only support link layer QoS mechanisms and do not support
DiffServ Layer 3 (IP) QoS. The NSCs provide default settings for DiffServ
PHBs and their mapping to the most popular link layer QoS mechanisms.
IP packets are classified and placed into a particular NSC that best matches
the application's performance needs resulting in consistent QoS treatment
at both Layer 3 (IP) and Layer 2.
NSCs can be thought of as default QoS policies built into the product.
Furthermore, a common set of NSC names permits a network administrator
to configure (for example, the Premium NSC service on products A, B and
C) without having to understand how each product implements the required
behaviors. The network administrator simply configures the networking
devices to place the traffic into the appropriate NSC that provides the
closest performance behavior required by the application.
Once the network is engineered to support the QoS performance per NSC,
the NSCs provide default, network-wide QoS policies. Each router or
switch is configured with default configurations for the QoS mechanisms
that best match the requirements for the applications in each NSC. The
NSCs reduce the possibility of configuration errors because the NSCs
account for different QoS mechanisms used in the different products that
may be used across the network. Nortel's products incorporate the NSCs.
However, since NSCs are standards-based, they can be implemented on
network elements that do not support them. This is accomplished by
manually configuring the QoS mechanisms required to match the QoS
performance of the NSC.
Table 19-11 illustrates how different applications are grouped into four
broad categories, called Network Control, Interactive, Responsive and
Timely, each with a common set of performance characteristics. The
Network Control category is for applications performing OAM of the
network. The remaining three categories are for end-user applications.
Table 19-11: Network Service Class Performance Objectives and Target
Applications
Traffic Category | NSC | Target Applications | Delay Tolerance | Jitter Tolerance | Loss Tolerance | Traffic Profile
Network Control | Critical | Critical heartbeats between nodes | Very Low | N/A | Very Low | Small packets
Network Control | Network | ICMP, OSPF, BGP, RIP, ISIS, COPS, RSVP, DNS, DHCP, BootP, high priority OAM | Low | N/A | Low to Very Low | Variable-sized packets
Interactive | Premium | VoIP (G.711, G.729 and other codecs); telephony signaling between gateway or end device and call server (H.248, MGCP, H.323, SIP); Lawful Intercept; T.38 Fax over IP; Circuit Emulation over IP | Very Low | Very Low | Very Low | Typically small fixed-sized packets
Interactive | Platinum | Interactive Video (Video Conferencing); Interactive Gaming | Low to Very Low | Low | Very Low | Typically large variable-sized packets
Responsive | Gold | Streaming audio, Webcasts; streaming video (video on demand); broadcast TV; pay per view movies and events; video surveillance and security | High | High | Very Low to Low | Variable-sized packets
Responsive | Silver | Client/Server applications; SNA terminal to host transactions; Web-based ordering; credit card transactions, wire transfers; ERP applications (SAP/BaaN) | Low-Medium | N/A | Low to Very Low | Variable-sized packets
Timely | Bronze | Email; billing record transfer; non-critical OAM&P (SNMP, TFTP) | High | N/A | Low | Variable-sized packets
Timely | Standard | All traffic not in any of the other classes; best effort traffic | Typically Not Specified | N/A | Typically Not Specified | Variable-sized packets
Network Control Traffic Categories
Network Control traffic is not of interest to an end-user but is important for
network operations. Network Control traffic is different from application
control (signaling) required for some user applications. Network elements
(for example, switches and routers) initiate and terminate the Network
Control traffic. For example, OSPF routing table updates being propagated
across a network are considered Network Control traffic because they do
not originate from an end-user application. Conversely, at call setup, the
SIP protocol signaled between an IP phone and a call server that controls it
is considered End-User traffic and not Network Control traffic. Network
Control traffic typically requires little bandwidth but must be delivered
across the network to keep the network operational.
End-User Traffic Categories
End-user or subscriber traffic is subdivided into three categories, namely
Interactive, Responsive and Timely. Interactive traffic, which is between
two people, is most sensitive to delay, loss and jitter. Responsive traffic is
typically between a person and a host. Responsive traffic is less affected by
jitter and can tolerate longer delays than Interactive traffic. Timely traffic is
either between hosts or between hosts and people, and the delay tolerance
is significantly longer than Responsive traffic. To put this into perspective:
- Interactive traffic requires a delay performance on the order of tens
  of milliseconds.
- Responsive traffic requires a delay performance on the order of
  hundreds of milliseconds.
- Timely traffic requires a delay performance on the order of seconds
  to minutes.
Network operators can categorize their applications based on the QoS
performance that they require. Table 19-11 provides some common
applications and the NSCs that best support them based on their
performance requirements. For example, the Premium NSC is best suited
for IP telephony applications. The Platinum NSC is best suited for Video
Conferencing and Interactive Gaming applications. The Gold NSC is best
suited for streaming media applications. The Silver NSC is best suited for
Interactive Client/Server applications. The Bronze NSC is best suited for
store and forward applications such as e-mail. The Standard NSC is best
suited for applications not requiring or expecting any QoS from the
network; that is, best effort treatment.
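This categorization can be summarized as a lookup. The sketch below is my own framing of Table 19-11; the application keys and the best-effort fallback are illustrative assumptions:

```python
# NSC best suited to each application family, per Table 19-11.
APP_TO_NSC = {
    "ip_telephony": "Premium",
    "video_conferencing": "Platinum",
    "interactive_gaming": "Platinum",
    "streaming_media": "Gold",
    "client_server": "Silver",
    "email": "Bronze",
}

def choose_nsc(application: str) -> str:
    """Applications not expecting QoS fall through to best-effort Standard."""
    return APP_TO_NSC.get(application, "Standard")

print(choose_nsc("ip_telephony"))  # Premium
print(choose_nsc("bulk_backup"))   # Standard
```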
In addition to providing default configurations for QoS mechanisms, the
NSCs provide default mapping between DiffServ and different link layer
QoS technologies that a particular interface uses (for example, 802.1p for
an Ethernet interface). Finally, the network must be engineered to support
the QoS performance objectives (maximum packet delay, packet jitter and
packet loss) for each NSC. Network engineering considerations include
transmission delay of each network connection in addition to the number of
routers or switches (hops) that the packet must traverse across the network.
NSCs provide the following default settings for various nodal QoS
mechanisms:
- Mark or remark DiffServ code point
- Mark or remark link layer QoS
- Provide rate enforcement (policing) configured per DSCP per NSC
- Use a particular scheduler
- Classify based on a default DSCP
- Enable or disable shaping
- Apply a particular queue management mechanism
Additionally, the network must be engineered to provide the following QoS
performance objectives for each NSC:
- Packet Delay
- Packet Jitter (Delay Variation)
- Packet Loss Ratio
For example, the Premium NSC is designed to support IP Telephony
applications such as VoIP. Table 19-12 outlines some of the default settings
for nodal QoS mechanisms for the Premium NSC. Note that the link layer
QoS attributes will be specific for a given interface type and the network
performance objectives can vary based on business objectives or end-user
quality targets. For example, for VoIP, the maximum one-way packet delay
could be engineered to be 150 ms and provide good QoS. However, in
some scenarios where bandwidth is very expensive, the maximum one-way
delay could be engineered to be 200 ms to account for longer queuing
delays on the low bandwidth WAN connections. While the 150 ms will
provide the best QoS, the 200 ms may provide acceptable QoS while
meeting business cost objectives.
Table 19-12: Premium NSC default nodal settings

Nodal QoS Mechanism | Premium NSC default settings
Classifier | For trusted interfaces, all packets marked with the Expedited Forwarding (EF) DSCP or Class Selector (CS) 5 DSCP are placed into the Premium NSC.
Marker | Once classified from untrusted interfaces, mark voice media packets with the EF DSCP and voice signaling packets with the CS5 DSCP.
Policer | Meter packets to the configured rate and committed burst size (for example, CIR and CBS). Drop packets exceeding the configured rate or burst size.
Link Layer QoS | For Ethernet interfaces, mark all packets with 802.1p user priority 6. For ATM interfaces, use ATM service category rt-VBR and mark all packets with CLP=0. For Frame Relay interfaces, mark all packets with DE=0. For multiclass PPP interfaces, mark all packets with PPP Class Number 3.
Scheduler | Use priority scheduler.
Queue Management | Use tail drop queue management. Disable Active Queue Management (AQM) techniques such as WRED.
Shaping | Disable shaping.

Table 19-13 provides a summary of traffic conditioning per NSC.
Table 19-13: Network Service Class Traffic Management Summary
Table 19-14 provides a summary of the default QoS mapping policy
between DSCP and Layer 2 QoS fields. When an incoming packet is
classified and placed into an NSC, it inherits the default QoS mapping and
traffic management policies illustrated in Table 19-13 and Table 19-14.
NSC | DiffServ PHB Group | Standard DSCPs per NSC | Policing Action for out-of-profile traffic | Scheduler Type | Queue Mgmt.
Critical | CS | CS7 | Drop | Priority 1 | Tail Drop
Network | CS | CS6 | Drop | Weighted | Tail Drop
Premium | EF | EF, CS5 | Drop | Priority 2 | Tail Drop
Platinum | AF4 | AF41, AF42, AF43, CS4 | Remark | Weighted | AQM
Gold | AF3 | AF31, AF32, AF33, CS3 | Remark | Weighted | AQM
Silver | AF2 | AF21, AF22, AF23, CS2 | Remark | Weighted | AQM
Bronze | AF1 | AF11, AF12, AF13, CS1 | Remark | Weighted | AQM
Standard | DF | DF, CS0 | None (managed by AQM) | Weighted | AQM
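These per-NSC defaults can be captured in a small lookup structure. The sketch below keeps one representative DSCP per class plus the Ethernet 802.1p and MPLS E-LSP EXP markings from the tables; the field names and the interface switch are illustrative assumptions, not a product API:

```python
# Default markings per NSC (representative DSCP, 802.1p, E-LSP EXP).
NSC_DEFAULTS = {
    "Critical": {"dscp": "CS7",  "dot1p": 7, "exp": 7},
    "Network":  {"dscp": "CS6",  "dot1p": 7, "exp": 6},
    "Premium":  {"dscp": "EF",   "dot1p": 6, "exp": 5},
    "Platinum": {"dscp": "AF41", "dot1p": 5, "exp": 4},
    "Gold":     {"dscp": "AF31", "dot1p": 4, "exp": 3},
    "Silver":   {"dscp": "AF21", "dot1p": 3, "exp": 2},
    "Bronze":   {"dscp": "AF11", "dot1p": 2, "exp": 1},
    "Standard": {"dscp": "DF",   "dot1p": 0, "exp": 0},
}

def layer2_marking(nsc: str, interface: str) -> int:
    """Map an NSC to its default Layer 2 marking for an interface type."""
    d = NSC_DEFAULTS[nsc]
    return d["dot1p"] if interface == "ethernet" else d["exp"]

print(layer2_marking("Premium", "ethernet"))  # 6
print(layer2_marking("Gold", "mpls"))         # 3
```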
Table 19-14: Network Service Class Default QoS Mapping Policy

Network Service Class | DSCP | IP Prec. (TOS) | ATM CoS | ATM CLP | Frame Relay DE | PPP Class Number (short seq. format) | Ethernet 802.1p | MPLS E-LSP EXP
Critical | CS7 | 7 | rt-VBR | 0 | 0 | 2 | 7 | 7
Network | CS6 | 6 | rt-VBR | 0 | 0 | 2 | 7 | 6
Premium | EF, CS5 | 5 | rt-VBR | 0 | 0 | 3 (see note) | 6 | 5
Platinum | AF41, CS4 | 4 | rt-VBR | 0 | 0 | 1 | 5 | 4
Platinum | AF42, AF43 | 4 | rt-VBR | 1 | 1 | 1 | 5 | 4
Gold | AF31, CS3 | 3 | nrt-VBR | 0 | 0 | 1 | 4 | 3
Gold | AF32, AF33 | 3 | nrt-VBR | 1 | 1 | 1 | 4 | 3
Silver | AF21, CS2 | 2 | nrt-VBR | 0 | 0 | 1 | 3 | 2
Silver | AF22, AF23 | 2 | nrt-VBR | 1 | 1 | 1 | 3 | 2
Bronze | AF11, CS1 | 1 | nrt-VBR | 0 | 0 | 1 | 2 | 1
Bronze | AF12, AF13 | 1 | nrt-VBR | 1 | 1 | 1 | 2 | 1
Standard | DF | 0 | UBR | 1 | 1 | 0 | 0 | 0

Note: The Premium NSC traffic uses a higher PPP class number because it is
scheduled over a connection before Critical and Network NSC traffic to
minimize any jitter that would be introduced by the network control traffic
over low-bandwidth (<1 Mbps) connections.

Table 19-15: Default markings for telephony gateways, terminals, and gateway controllers

IP Telephony Flow Type | Application-to-Application | Network Service Class | Default DSCP | Default 802.1p User Priority
Voice Media (bearer) | Gateway-Gateway | Premium | EF | 6
Voice Media (bearer) | Terminal-Terminal | Premium | EF | 6
Voice Media (bearer) | Terminal-Gateway | Premium | EF | 6
Voice Signaling (Control) | Terminal-Terminal | Premium | CS5 | 6
Voice Signaling (Control) | Terminal-Gateway | Premium | CS5 | 6
Voice Signaling (Control) | Gateway-Gateway Controller | Premium | CS5 | 6
Fax (T.38) | Gateway-Gateway | Premium | CS5 | 6
Telephony Gateways and Terminal Configuration
VoIP gateways and terminals (IP phones) can be configured to premark the
VoIP packets for the Premium NSC as illustrated in Table 19-15. The
network will then provide the Premium NSC performance required by
these applications.
Network Element Configuration
All network elements (for example, routers and switches) should place
VoIP traffic into the Premium NSC so the network element applies the QoS
mechanisms best suited for VoIP applications. This reduces the risk of
misconfiguring the QoS mechanisms in the network elements.
Additional QoS Implementation Considerations
Considerations over Low-Bandwidth Connections
There are a number of items to consider when sending real-time packets
over low-bandwidth (< 1 Mbps) access, metro or wide area connections.
This section specifically discusses techniques and recommendations for
such connections. Note that this section specifically discusses VoIP
applications but the techniques described also apply to video applications.
Note, however, that VoIP applications require relatively low bandwidth
when compared to video applications, especially video conferencing.
Therefore, it may not be practical to deploy video conferencing
applications when the connection bandwidth is less than 384 kbps.
Per call bandwidth
The amount of bandwidth used by a VoIP call depends on whether the
voice signal is compressed and which link layer protocol the VoIP packet
uses for transport. To maximize bandwidth utilization, it is desirable to
compress voice signals over low-bandwidth connections. There are several
possible choices for voice compression. The ITU G.729 codec provides
perhaps the best voice quality using the least amount of bandwidth (note
that there are other higher compression codecs such as G.723 but such
codecs do not provide consistently acceptable voice quality). The G.729
codec compresses the voice call from its original 64 kbps down to 8 kbps.
This 8 kbps is the raw voice bandwidth, which is then encapsulated into
an IP payload that results in 24 kbps of IP bandwidth.
Voice compression is typically not used over high bandwidth, Ethernet
connections. Uncompressed voice is encoded using the ITU G.711 codec
with the voice samples encapsulated into an IP packet. This results in 80
kbps of IP bandwidth using 20 ms voice samples. This is quite small
compared to 10 Mbps or 100 Mbps of Ethernet bandwidth.
In addition to the IP bandwidth, the Layer 2 protocols (for example,
Ethernet, PPP, frame relay or ATM) used to transport the IP packet must be
included when calculating the total bandwidth.
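The per-call arithmetic above can be checked with a short calculation. This sketch (the function name and layout are my own) derives the IP-layer bandwidth from the codec rate, the sample interval, and the 40-byte IP/UDP/RTP header; Layer 2 overhead would be added on top, as noted:

```python
IP_UDP_RTP_HEADER = 40  # bytes of uncompressed IP/UDP/RTP header per packet

def ip_bandwidth_kbps(codec_kbps: int, sample_ms: int) -> float:
    """IP-layer bandwidth of one call: voice bytes per packet plus the
    40-byte header, one packet per sample interval."""
    payload_bytes = codec_kbps * sample_ms // 8  # kbit/s x ms = bits -> bytes
    packet_bytes = payload_bytes + IP_UDP_RTP_HEADER
    return packet_bytes * 8 / sample_ms          # bits per ms = kbit/s

print(ip_bandwidth_kbps(8, 20))   # G.729: 24.0 kbps of IP bandwidth
print(ip_bandwidth_kbps(64, 20))  # G.711: 80.0 kbps of IP bandwidth
```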
Per Call Bandwidth Example
One of the main attractions of VoIP is the ability to use an existing data
network infrastructure. For cost reasons, some Enterprises connect branch
offices using low-bandwidth metro or wide area connections. For these
situations, special considerations must be made when VoIP is added to
these bandwidth-limited connections. When VoIP calls are active, the
routers are typically configured to reduce the data traffic throughput by the
amount of bandwidth for all active VoIP calls (for example, 24 kbps of IP
bandwidth per call). However, this may reduce the data traffic throughput
to an unacceptable level. In this case, adding VoIP to the existing data
network requires increasing the metro or wide area connection bandwidth.
Example: A company has two sites that are connected via a leased line
WAN connection operating at 128 kbps. This bandwidth is sufficient for
the current data requirements. The company believes that it needs about
70 to 80 kbps with occasional traffic peaks to the full 128 kbps. The company
decides to use the G.729 codec to minimize the amount of bandwidth per
voice call and wants to support up to four simultaneous voice calls over the
WAN between the sites. If all four calls were simultaneously active, this
would require 96 kbps of the 128 kbps WAN, leaving only 32 kbps of
bandwidth remaining for the data traffic. Based on the company's business
needs, this is an insufficient amount of data traffic bandwidth.
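The example's arithmetic, as a quick check (variable names are illustrative):

```python
# Four simultaneous G.729 calls, each at 24 kbps of IP bandwidth,
# on a 128 kbps leased-line WAN connection.
wan_kbps = 128
voice_kbps = 4 * 24
data_kbps = wan_kbps - voice_kbps
print(voice_kbps, data_kbps)  # 96 32 -> only 32 kbps left for data traffic
```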
In general, use the G.729 codec for compressed voice over low-bandwidth
connections. G.729 uses 24 kbps of IP bandwidth for each G.729 call. In
general, the G.711 codec (uncompressed voice) should be used over high
bandwidth connections. G.711 uses 80 kbps of IP bandwidth for each call.
VoIP Delay Budget
The overall one-way delay budget for a voice call (from the time you
speak to the time the receiver hears your voice) should typically be no
longer than 200 ms (with a goal of 150 ms) for good-quality voice over
landline connections. The amount of delay is often longer but unavoidable
for satellite and other types of wireless connections. Studies have shown
that as the 200 ms delay budget is exceeded, most users tend to perceive the
delay, resulting in dissatisfaction in voice quality. Every time a VoIP packet
passes through a device or network connection, delay is introduced. A
significant amount of delay can be introduced over low-bandwidth
connections.
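One concrete contributor is serialization delay: the time to clock a packet onto a slow link. The following sketch is not from the text (the packet sizes are illustrative) but shows why a full-size data packet ahead of a voice packet matters on a 128 kbps connection:

```python
def serialization_delay_ms(packet_bytes: int, link_kbps: int) -> float:
    """Milliseconds needed to transmit one packet at the given link rate."""
    return packet_bytes * 8 / link_kbps  # bits / (kbit/s) = ms

print(serialization_delay_ms(1500, 128))  # 93.75 ms: a 1500-byte data packet
print(serialization_delay_ms(60, 128))    # 3.75 ms: a 60-byte G.729 packet
```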
Reducing VoIP Bandwidth
Over low bandwidth connections, it is desirable to minimize the amount of
VoIP bandwidth required while still maintaining high quality voice. While
one could use a codec with higher compression than G.729, this often
results in less desirable voice quality. Another technique is to use IP/UDP/
RTP header compression since the IP/UDP/RTP header overhead for a
VoIP packet is quite large. Header compression is only used over the low
bandwidth connection.
For example, two G.729 voice samples are sent per IP packet. This results
in twenty bytes of voice plus forty bytes of IP/UDP/RTP header
information to create a sixty byte IP packet for twenty bytes worth of voice
payload. By using IP/UDP/RTP compression, the header can be reduced to
two bytes (plus Layer 2 header). When using PPP as the Layer 2 protocol,
eight bytes are added to the 22 byte compressed IP packet. This results in a
50% bandwidth savings without sacrificing any voice quality.
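The arithmetic of this example can be checked directly. The 2-byte compressed header and the 8 bytes of PPP framing follow the text; the 50% figure compares the compressed on-the-wire frame with the uncompressed 60-byte IP packet:

```python
voice_bytes = 20                    # two 10-byte G.729 samples per packet
uncompressed_ip = voice_bytes + 40  # 40-byte IP/UDP/RTP header -> 60 bytes
compressed_ip = voice_bytes + 2     # header compressed to 2 bytes -> 22 bytes
ppp_frame = compressed_ip + 8       # plus 8 bytes of PPP framing -> 30 bytes
savings = 1 - ppp_frame / uncompressed_ip
print(ppp_frame, savings)           # 30 0.5 -> a 50% bandwidth savings
```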
Reducing Delay via Packet Fragmentation and Interleaving
In converged voice/data IP networks, packets must be fragmented and
interleaved prior to traversing bandwidth-limited (<<1 Mbps) connections
to minimize the real-time packet delay and the jitter that is introduced by
the longer data packets. It is important for the router to perform both
fragmentation and interleaving. The fragmentation function only breaks up
the larger data packets into fragments. The interleaving function interleaves
one voice packet with each data fragment so there is a fixed (one voice
packet time) amount of queuing delay and jitter. There are several different
protocols that can be used to fragment packets. For frame relay
connections, you can use the frame relay Implementation Agreement
FRF.12 for fragmenting packets. ATM natively provides fragmentation
since all packets are fragmented into 53 byte ATM cells. These only
provide fragmentation, so interleaving is a separate function that must be
configured on the router. There are two additional types of fragmentation
techniques that are not limited to a specific link layer technology, such as
ATM or frame relay. These methods are via PPP with multiclass extensions
and IP fragmentation. PPP fragmentation is the preferred method if the
router supports it. However, if the router does not support PPP
fragmentation and interleaving, IP fragmentation should be used. Refer to
Chapter 10 for a more detailed discussion on both approaches.
Note that with PPP fragmentation, the packets are only fragmented over the
PPP connection. With IP fragmentation, packets are fragmented from
source to destination resulting in reduced application performance. For
PPP fragmentation, the fragment size of the data packet is selected based
on the maximum size of one VoIP packet. Depending upon the voice codec
used and the number of voice samples per voice payload, the PPP
fragmentation size will vary. Refer to Table 19-16.
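The sizing rule behind Table 19-16 can be expressed directly: the PPP data fragment is sized to the largest voice packet, i.e. 10 bytes of G.729 voice per sample plus the 40-byte IP/UDP/RTP header. A minimal sketch (the helper name is my own):

```python
def ppp_fragment_size(samples_per_packet: int) -> int:
    """Data fragment size matching the uncompressed G.729 voice packet."""
    return samples_per_packet * 10 + 40  # voice bytes + IP/UDP/RTP header

for n in (2, 3, 4):
    print(n, ppp_fragment_size(n))  # 60, 70, 80 bytes, matching Table 19-16
```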
Table 19-16: PPP fragment size for data packet

Voice samples per G.729 payload | Voice payload (10 bytes/sample) | IP/UDP/RTP Overhead | Voice packet size (no header compression) | PPP fragment size for data packet (no header compression)
2 | 20 bytes | 40 bytes | 60 bytes | 60 bytes
3 | 30 bytes | 40 bytes | 70 bytes | 70 bytes
4 | 40 bytes | 40 bytes | 80 bytes | 80 bytes

Table 19-17 provides the recommended maximum MTU size for different
connection speeds when using IP fragmentation and interleaving:

Table 19-17: Recommended MTU size per connection speed

Connection Speed (kbps) | 56  | 64  | 128 | 256 | 512
Maximum MTU (bytes)     | 128 | 128 | 256 | 512 | 1024
VoIP Packet Scheduling
It is important that all VoIP packets be queued in a router or switch using a
strict priority scheduler, whereby VoIP packets will be transmitted ahead of
other user applications so the VoIP packets receive the minimum amount of
queuing delay. However, since a strict priority scheduler can block (starve)
the servicing of other traffic queues, one must limit the maximum amount
of bandwidth that the VoIP traffic can consume over the connection. This is
often referred to as rate limiting. With this capability, the VoIP packets in
the strict priority queue are transmitted up to a configured rate, percentage
of interface bandwidth or a certain number of packets (based on the queue
buffer depth) after which, the other user traffic queues are serviced.
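The rate-limited strict-priority behavior can be sketched with a token bucket capping the voice queue. This is an illustrative model, not vendor code; the class and method names are assumptions:

```python
import time
from collections import deque

class RateLimitedPriorityScheduler:
    """Strict priority for voice, capped by a token bucket so voice
    cannot starve the data queue (illustrative sketch)."""

    def __init__(self, voice_rate_bps: float, burst_bytes: int):
        self.voice_q, self.data_q = deque(), deque()
        self.rate = voice_rate_bps / 8.0     # voice credit in bytes/second
        self.burst = float(burst_bytes)
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def dequeue(self):
        self._refill()
        # Voice goes first, but only while it has byte credit left.
        if self.voice_q and self.tokens >= len(self.voice_q[0]):
            pkt = self.voice_q.popleft()
            self.tokens -= len(pkt)
            return pkt
        if self.data_q:                      # otherwise service user data
            return self.data_q.popleft()
        return self.voice_q.popleft() if self.voice_q else None
```

Under contention the voice queue is always served first up to its configured rate, after which the other traffic queues get the link, matching the rate-limiting behavior described above.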
Weighted schedulers such as Weighted Round Robin (WRR) or
Weighted Fair Queuing (WFQ) are not recommended for VoIP. If the router
or switch does not support a priority scheduler, then the queue weight for
VoIP traffic should be configured to 100% if the product supports rate
limiting. Otherwise, it should be configured to a high percentage to
minimize the delay and jitter that other traffic can introduce to the VoIP
packets.
Packet Reordering
In some cases there may be multiple paths for a VoIP packet to take when
traveling from its source to its destination. If all VoIP packets do not take
the same path, then packets could arrive out of order. This can cause voice
quality issues, even though packet reordering often has little or no adverse
effect on data application quality, since such data packets can be
retransmitted via the TCP protocol.
If two locations connect via two frame relay PVCs, one must ensure that all
VoIP packets for a particular call traverse the same PVC. The routers can
be configured to direct the voice packets from the same source/destination
IP address to traverse the same PVC. Another approach is to configure the
router to send all voice traffic over one of the PVCs.
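The flow-pinning idea can be sketched as a hash over the source/destination address pair, so every packet of a call maps to the same PVC. The PVC names and the CRC choice below are illustrative assumptions, not a router feature:

```python
import zlib

PVCS = ["pvc-a", "pvc-b"]  # hypothetical names for the two frame relay PVCs

def pvc_for_flow(src_ip: str, dst_ip: str) -> str:
    """Pick a PVC deterministically from the flow's address pair, so all
    packets of one call take the same path and cannot be reordered."""
    key = f"{src_ip}->{dst_ip}".encode()
    return PVCS[zlib.crc32(key) % len(PVCS)]

# Every packet of the same call lands on the same PVC:
print(pvc_for_flow("10.0.0.1", "10.0.1.1") == pvc_for_flow("10.0.0.1", "10.0.1.1"))  # True
```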
What you should have learned
IP packets will traverse different Layer 2 networks, each with its unique
QoS mechanisms. In order to provide consistent QoS performance across
the different Layer 2 networks, a network-wide mapping policy must be
implemented. Such a policy describes how the DSCPs for different
DiffServ PHBs map to each Layer 2 QoS mechanisms.
Mapping DiffServ over Ethernet requires mapping the DSCP values to
Ethernet 802.1p values so non-DiffServ-aware network elements can
identify the different classes of traffic over Ethernet. When traversing
frame relay networks, the DSCP values are segregated into packets that can
be discarded and those that cannot during times of network congestion.
Those that are eligible for discard have the frame relay DE set to one;
otherwise, it is set to zero to indicate that the packet should not be
discarded. Similarly, when traversing ATM networks, the DSCPs that
indicate which packets can be discarded under congestion have the ATM
CLP set to one while those that should not be discarded have the ATM CLP
set to 0. MPLS routers supporting E-LSPs use the EXP field information in
the MPLS header to determine the DiffServ PSC of a packet. MPLS routers
supporting L-LSPs use the MPLS LSP label to determine the DiffServ PSC
of a packet.
Over low-speed connections (< 1 Mbps) in a converged network, data
packets should be fragmented and interleaved with the real-time voice
packets to minimize jitter that they could introduce to the voice packets.
PPP class numbers enable the packet fragments to be reassembled into the
proper DiffServ PHB at the receiving end of the PPP connection.
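The arithmetic behind fragmentation can be made concrete. The link speed and the tolerable blocking delay fix the maximum fragment size; the figures below are illustrative:

```python
def max_fragment_bytes(link_bps: int, max_blocking_ms: float) -> int:
    """Largest fragment whose serialization delay stays under the
    target, so a voice packet never waits longer than that behind a
    data fragment already on the wire."""
    return int(link_bps * max_blocking_ms / 1000 / 8)

def serialization_ms(size_bytes: int, link_bps: int) -> float:
    """Time to clock one frame onto the link."""
    return size_bytes * 8 * 1000 / link_bps

# On a 128 kbps link, an unfragmented 1500-byte datagram blocks the
# link for ~94 ms; capping blocking at 10 ms needs 160-byte fragments.
assert serialization_ms(1500, 128_000) == 93.75
assert max_fragment_bytes(128_000, 10) == 160
```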
Applications can be categorized based on a common set of QoS
performance criteria. These categories are then used to select the DiffServ
QoS mechanisms that best match the application's performance
requirements or end-user performance expectations.
Network Service Classes (NSCs) provide network-wide default QoS
policies to simplify QoS provisioning and default mapping between
DiffServ and Layer 2 QoS mechanisms. NSCs can be implemented in any
product to hide implementation differences. Finally, NSCs reduce the
likelihood of provisioning errors and result in consistent QoS policy
implementations across the network.
References
RFC 2597, J. Heinanen, et al., Assured Forwarding PHB Group, IETF,
http://www.ietf.org/rfc/rfc2597.txt
ATM Forum AF-TM-0121.000, Traffic Management Specification
Version 4.1, ftp://ftp.atmforum.com/pub/approved-specs/af-tm-0121.000.pdf
RFC 2474, K. Nichols, et al., Definition of the Differentiated Services
Field (DS Field) in the IPv4 and IPv6 Headers, IETF,
http://www.ietf.org/rfc/rfc2474.txt
RFC 2475, S. Blake, et al., An Architecture for Differentiated Services,
IETF, http://www.ietf.org/rfc/rfc2475.txt
RFC 3270, F. Le Faucheur, et al., Multi-Protocol Label Switching
(MPLS) Support of Differentiated Services, IETF, May 2002,
http://www.ietf.org/rfc/rfc3270.txt
RFC 3246, B. Davie, et al., An Expedited Forwarding PHB, IETF,
http://www.ietf.org/rfc/rfc3246.txt
IEEE 802.1Q, Virtual Bridged Local Area Networks,
http://standards.ieee.org/getieee802/download/802.1Q-2003.pdf
R. Santitoro, Introduction to Quality of Service (QoS), April 2003,
http://www.nortelnetworks.com/products/02/bstk/switches/bps/collateral/56058.25_022403.pdf
RFC 1990, K. Sklower, et al., The PPP Multilink Protocol (MP), IETF,
August 1996, http://www.ietf.org/rfc/rfc1990.txt
RFC 2686, C. Bormann, The Multi-Class Extension to Multi-Link PPP,
IETF, September 1999, http://www.ietf.org/rfc/rfc2686.txt
RFC 1973, W. Simpson, PPP in Frame Relay, IETF, June 1996,
http://www.ietf.org/rfc/rfc1973.txt
FRF.12, Frame Relay Fragmentation Implementation Agreement,
December 1997, http://www.mplsforum.org/frame/Approved/FRF.12/frf12.pdf
Chapter 20
Achieving QoE: Engineering Network
Performance
François Blouin
Figure 20-1: Transport path diagram
Concepts covered
Why network engineering is important
E-Model
Real-time applications engineering & planning process
Hypothetical reference connection
Echo control
Budgeting delay & jitter
Silence suppression considerations
[Figure 20-1 shows the real-time transport stack from the application's
perspective down to the network: audio/voice/video codecs over RTP, with
session and gateway control via RTCP, RTSP, SIP, H.323 and
H.248/MGCP/NCS; UDP/TCP/SCTP over IP; IP carried over MPLS, ATM
(AAL1/2, AAL5), frame relay or Ethernet; SONET/TDM transport; and
physical media including copper, fiber, WLAN, cellular, xDSL and
DOCSIS/HFC cable, with QoS and resiliency spanning the stack.]
Router schedulers and buffer sizing
Engineering for a traffic profile
Introduction
Historically, TDM voice networks were engineered to ITU-T quality
standards; that is, the end-to-end TDM impairment budget was well
understood, and each operator's network was allowed a distinct portion of
that budget. Element requirements were well defined and measurable.
Where packet voice standards exist, they are rudimentary and incomplete,
and they require further development to meet the needs of packet-based
application planning and engineering.
Packet networks are being deployed as replacements for TDM for
switching voice and multimedia traffic. Packet transmission changes the
impairment budget, and network design has to consider additional
impairments, such as delay, distortion, and jitter. The additional packet
network impairments need to be characterized and modeled to accurately
predict the application performance and define workable operating
margins.
Some network impairments are unavoidable: propagation delay (physics),
packetization delay, and legacy equipment. However, many can be
engineered to achieve predictable, acceptable voice quality through careful
control of the remaining impairment margin. Network planning and
engineering provide guidance on the correct choices for each parameter
including optimal packet size, jitter, total end-to-end delay, loss plan, echo
control, choice of codec, link speed, buffer dimensioning and so on.
Engineering is complicated by evolutionary migration to packet networks,
which creates islands of packet transmission in the global multioperator
TDM network. The large number of operators offering voice services and
the lack of packet interface standards result in the use of TDM to patch
together different packet domains. Each conversion between a TDM hop
and a packet network adds significant impairment to the connection. Network
planning should be done to avoid TDM hops between packet islands. In
this chapter, potential issues related to real-time voice over packet and data
applications will be described along with mitigations and best practice
engineering guidelines.
QoE Engineering Methodology
The end users of network services do not care how service quality is
achieved. What matters to them is how easily they can complete their tasks
and achieve their goals, that is, generally, their Quality of Experience
(QoE). Carriers and transport providers, on the other hand, are very
concerned with defining which QoS mechanisms they should use, traffic
engineering requirements, and how to implement, optimize and configure
the network while minimizing cost and maximizing link utilization.
This section describes in more detail the method and process for
engineering a network to deliver acceptable end-user QoE, also referred to
as QoE engineering. There is a distinction between QoE engineering and
traffic engineering. Traffic engineering is the prevailing technique for
mapping traffic flows to ensure optimally utilized bandwidth and prevent
congestion buildup. Traffic engineering is one of the necessary steps of
QoE engineering, but it is not sufficient. To meet specific service quality
targets and user QoE, traffic engineering needs to be supplemented by
additional considerations highlighted in the QoE engineering process. In
order to deliver acceptable service quality and even differentiated services,
QoE should be part of the engineering methodology process; hence, a top-
down approach is proposed as an effective technique to deliver enhanced
customer value while performing network engineering. A four-step QoE
engineering process is shown in Figure 20-2 and will be discussed in the
remaining sections of this chapter for both voice and data services.
Figure 20-2: QoE Engineering Methodology
This method is essentially a consolidation of fundamental elements of an
end-to-end system in which the interrelationship of QoS-QoE and Traffic
Engineering (TE) is defined based upon a top-down approach starting at
the end-user level. The objective is to facilitate the selection of effective
QoS mechanisms to satisfy a given end-user application QoE.
[Figure 20-2 depicts the four-step QoE engineering methodology, a
top-down approach moving from the user (QoE) space to the network
architecture (QoS) space:
1. Define service QoE performance metrics & targets: define service level
guarantees (best effort, relative, soft, hard).
2. Identify QoE contributing factors & dependencies: impairments (delay,
loss, jitter); application decomposition; client/server interaction; flow
type and duration.
3. Determine HRXs and QoS mechanism requirements: network
transformation phases, call scenarios and HRX topologies; echo canceller
placement & transmission planning; nodal level (scheduler discipline,
policing, queue management); end-to-end (admission control, bandwidth
reservation).
4. Traffic engineering & resource allocation: determine traffic demands,
distribution & bottleneck links; budget allocation (delay, loss, jitter);
router buffer dimensioning & scheduler share; bandwidth provisioning
(static vs. dynamic/on-demand); routing constraints and path layout.
A simulation is then run to analyze network and QoE behaviour: if the QoE
targets are met, the service QoE requirements are validated as satisfied by
the QoS-enabled solution; if not, the steps are iterated.]
Voice QoE Engineering
In this section, we review the first two steps described in the above QoE
engineering process model; that is, identifying the QoE performance
metrics and dependencies.
Voice QoE performance metrics & targets
The first step is to determine the metrics used for service quality
performance targets. Once the metrics have been defined, the associated
QoE targets should be determined to achieve the desired voice quality
service level. QoE targets can be based upon an absolute threshold or be
relative to a known user experience. The latter is preferred for voice
services, since end users have long experience with telephone systems.
Please note that defining targets is not a trivial task and requires the
appropriate subjective evaluation expertise along with human factors
behavior knowledge. An approach based on known user experience implies
that there might be multiple targets, and no single target is applicable to all
situations. For example, mobile users have different expectations than
wireline users. Similarly, people making international/overseas calls have
different expectations than on local calls. Therefore, there might be
multiple targets required depending on the call scenarios supported by the
network. It has been determined that a difference of 3R¹ is not noticeable
by typical users; therefore, a packet network could be engineered within
this margin in order to provide an equivalent replacement technology. A
difference of 3-7R might be noticeable but is most likely acceptable.
Larger R degradations (greater than 7R) are more likely to be noticeable.
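These thresholds amount to a simple planning check; the example R values below are arbitrary:

```python
def r_degradation_verdict(r_reference: float, r_packet: float) -> str:
    """Classify an R-factor drop against the 3R / 7R thresholds."""
    diff = r_reference - r_packet
    if diff < 3:
        return "not noticeable"
    if diff <= 7:
        return "noticeable but likely acceptable"
    return "likely noticeable"

assert r_degradation_verdict(93, 91) == "not noticeable"
assert r_degradation_verdict(93, 88) == "noticeable but likely acceptable"
assert r_degradation_verdict(93, 82) == "likely noticeable"
```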
Table 20-1: Voice QoE metrics and targets.
1. R: Model transmission rating. R is the predicted output quality index of the E-Model (ITU-T G.107). See
Chapter 3, Voice Quality, for details.
Services: Conversational voice (CBR and VBR)
User QoE performance targets (QoE metric: QoE target):
  R-factor R: PSTN minus Packet < 3R
  Delay: < 150 ms
  Distortion: Ie < 3R
  Path interruptions due to failure:
    Frequent interruption: 80 ms (affects speech intelligibility)
    Infrequent interruption: 3 sec (perceived as call drop)
Voice QoE contributing factors & dependencies
To understand the behavior of QoS mechanisms and their effect on
QoE, it is important to determine the factors and dependencies that affect
each QoE metric and to select the QoS mechanisms that most efficiently
control those factors. This section summarizes QoE metric dependencies
for conversational voice.
Delay, speech codec, packet loss, echo
Four factors have been identified to add impairment to packet networks
beyond that associated with switched-circuit networks and have been
discussed in Chapter 3. These include delay (including delay variation or
jitter), the speech codec, cell/packet loss, and echo. Echo control remains a
main area of concern because the increased delay across the network may
occasionally defeat the loss plan, which controls echo in the local network.
Figure 20-3: Summary of the voice QoE impairments and the impact on the
E-Model R
Echo Control and TELR
TELR is a measure of the level of echo signal reflected back to the talker,
expressed as the total signal loss (or attenuation) going from the
talker all the way to the reflection point and back again to the talker (the
echo path).
[Figure 20-3 summarizes the voice QoE impairments on an end-to-end
path (PSTN, media gateways, label edge routers) and their impact on the
E-Model R, where user satisfaction falls from very satisfied (R near 100)
through satisfied (90), some users dissatisfied (80), many users
dissatisfied (70) and nearly all users dissatisfied (60 and below) as
distortion and delay increase:
1. Speech coding: distortion introduced by the codec and potential
transcoding.
2. Talker echo: echo is audible when delay exceeds about 20 ms; the echo
path runs through the 2-wire loop, hybrid (THL), loss pad and codec at
the far end.
3. Late/lost packets: packet loss is caused by buffer overflow within the
network; in congestion situations, packets are lost and jitter is
uncontrolled, leading to late packets.
4. Delays: (a) queuing and jitter, where large buffer sizes add to overall
delay; (b) network delay, including propagation due to distance and
serialization delay on low-speed links; (c) packetization delay, including
intrinsic speech sample accumulation delay plus DSP processing
overhead.]
Eq. 1 TELR = SLR + EL + RLR
where TELR = Talker Echo Loudness Rating
SLR = Send Loudness Rating
RLR = Receive Loudness Rating
EL = Echo Loss
Echo loss is the sum of all the losses in the echo path, including the
Transhybrid Loss (THL), loss pads, and terminal coupling loss (TCLw),
exclusive of the SLR and RLR.
Example 1. A TELR calculation with analog terminals is shown in
Figure 20-4. The TELR at Side A is determined as follows:
TELR_A = SLR_A + EL_A + RLR_A
       = SLR_A + (Loss Pad_B + THL_B + Loss Pad_A) + RLR_A
       = 11 dB + (6 dB + 14 dB + 6 dB) + (-3 dB)
       = 34 dB
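Eq. 1 and Example 1 reduce to a straight sum, which can be checked mechanically:

```python
def telr_db(slr: float, echo_path_losses: list, rlr: float) -> float:
    """Eq. 1: TELR = SLR + EL + RLR, where EL is the sum of all
    losses in the echo path (loss pads, transhybrid loss, TCLw)."""
    return slr + sum(echo_path_losses) + rlr

# Example 1: SLR = +11 dB, loss pads of 6 dB each, THL = 14 dB,
# RLR = -3 dB (the negative receive rating reduces the total).
assert telr_db(11, [6, 14, 6], -3) == 34
```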
Figure 20-4: Example of TELR calculation with analog terminals
In this example, the analog terminal loudness ratings (SLR = +11 dB and
RLR = -3 dB) include the loss incurred in a nominal loop length of 2.7 km.
These values are based on updated information provided in the draft
version of TIA/EIA-470-C, Performance and Compatibility Requirements
for Telephone Sets with Loop Signaling. High TELR provides voice
connection with some immunity from degradation with increasing delay.
Calls with lower TELR show impairment at shorter delay. The various
curves in Figure 20-5 show the relationship of R with delay for different
values of TELR (based on nominal loudness values for a digital terminal).
These curves were determined using the ITU E-Model. The E-Model is a
network planning tool that predicts user perception of conversation quality
for a voice connection. For any given value of TELR (holding other
parameters, including terminal loudness, constant), the graph shows how
user satisfaction decreases with increasing delay. From the perspective of
constant delay, the conversation quality of the channel goes down as TELR
decreases (the amplitude of the echo increases). In Figure 20-5, this can be
observed by looking at a specific delay, say 100 ms, on the horizontal axis,
and observing that curves with lower TELR cross the vertical line,
representing that delay at lower and lower R. The black curve in Figure 20-
5 represents the performance of an all-digital connection having the best
possible TELR (65 dB).
Figure 20-5: Impact of echo on user-perceived voice quality
The first line of defense against echo is the insertion of loss in the path.
The introduction of loss reduces the level of echo returned to the talker
(that is, it increases the TELR). Loss insertion is of the most benefit in
connections that have two-wire terminations at both ends, because the echo
path then contains loss in both directions. This constitutes an improvement
from the talker's echo point of view, but it comes at the expense of OLR,
because the non-echo signal is also attenuated. There is a limit to how
much loss can be introduced before conversation becomes noticeably
impaired simply because it is hard to hear the person at the other end of
the connection. This limit is typically in the range of 5-7 dB and is
governed by OLR requirements (such as ITU-T Rec. G.111, Ref. 15) and
terminal loudness requirements (such as ITU-T P.310, Ref. 16). The
maximum permissible network loss that can be inserted in the connection
for the purpose of echo control will be the difference between the
maximum (quietest) OLR limit
and the maximum OLR when no other circuit losses are present
(dependent on the quietest SLR and RLR terminal ratings).
The impact of echo on user-perceived voice quality shown in Figure 20-5
is expressed as the transmission rating R, computed by the E-Model
as a function of delay and TELR. Low-TELR connections are much more
sensitive to delay than high-TELR connections.
HRXs and QoS Mechanism Requirements
The purpose of this section is to define a technique that facilitates the
performance analysis of a complex network architecture by reducing it to
a simpler single connection path, referred to as a Hypothetical Reference
Connection (HRX).
This section will provide guidance to facilitate the selection of effective
QoS mechanisms and to understand their limitations.
HRXs
It has been common practice for carriers to use HRXs to determine budget
allocation standards for nodal roles within their networks, such as
transmission, switching and access delays, loudness ratings, echo signal
path performance and quantization distortion. The connection is
hypothetical in the sense that it makes sensible assumptions about the
distances and the number and type of equipment involved, usually lumping
together common parts, rather than modeling a real network connection.
HRXs can be used to determine end-to-end performance and budgets for
packet network artifacts such as codec type, packet size, jitter and packet
loss, and to allocate a budget to each network component. The use of
modeling requires some
concrete scenarios as input to the modeling process. To fully assess the
performance of a network, call scenarios should include both the most
likely call scenarios and some extreme scenarios. Important aspects include
terminal types (analog, digital, cellular), access technologies (POTS,
wireless, broadband DSL/WLAN), the distance (local, long-distance and
international), and the call types (line-to-line, remote offices, PBX,
operator assisted and three way calling). Once the call scenarios are
selected, then the underlying network topologies and detailed HRXs need
to be determined along with the Echo Canceller (ECAN) locations.
Example 2. HRX impairment decomposition for a PBX-to-POTS
long-distance call is shown in Figure 20-6. The impairments are
classified under delay, distortion, sound level and echo. The delay
components should include all delays across the path including
processing, queuing/jitter, propagation and transmission. Similarly,
the distortion components should include packet loss, speech codec,
and transcoding. This process should be executed on all potential
call scenarios (such as three-way calling, operator-assisted calls,
voice mail, remote offices, and international calls) to identify the
most sensitive scenarios. At this stage, it is important to determine
all the fixed delays accurately (processing/propagation/
serialization); the variable/queuing delay will be further analyzed in
the traffic engineering step.
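A minimal sketch of the fixed-delay bookkeeping this step calls for. The component values below are illustrative placeholders, not figures from the book:

```python
# Hypothetical fixed one-way delays (ms) for a PBX-to-POTS HRX.
fixed_delays_ms = {
    "packetization (G.711, 20 ms frames)": 20.0,
    "DSP processing":                       5.0,
    "serialization on access links":        3.0,
    "propagation (long-distance)":         25.0,
    "jitter buffer (nominal)":             40.0,
}

budget_ms = 150.0                      # Table 20-1 delay target
total_fixed = sum(fixed_delays_ms.values())
margin = budget_ms - total_fixed       # left for variable/queuing delay
assert margin > 0, "HRX already exceeds the one-way delay budget"
```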
In the process of defining the HRXs, it is important to identify the weakest
link, or the bottleneck link, where potential congestion could occur. The
bottleneck link is typically the largest contributor to performance
degradation; therefore, the topology can be simplified to a single link
rather than a complex network of switches, routers and LAN elements, as
shown in Example 3.
Figure 20-6: PBX-to-POTS Call Scenario and associated HRX impairments
breakdown
Example 3. Figure 20-7 shows an Enterprise WAN access example,
whereby the Enterprise LAN client sites are connected to the
content provider by a service provider's core IP network. In order to
simplify the analysis, the complex network topology can be
reduced to a much simpler HRX while still providing very accurate
performance prediction, since the largest impairments come from
the bottleneck link and the corresponding WAN router nodes.
Figure 20-7: Enterprise WAN Access topology
The Enterprise WAN access HRX is derived from this complex topology.
This HRX will be used throughout the chapter to illustrate the various
QoS mechanisms under different operating conditions. The top topology
has been reduced to a single connection path to simplify the analysis. In
this case, the main area of interest is to assess the performance of the
QoS mechanisms implemented in the edge WAN router nodes; therefore, it
is assumed that the service provider core and the Enterprise LANs are
well provisioned and hence add no significant impairments.
QoS mechanisms selection
The selection of QoS mechanisms varies as a function of traffic type,
transport protocol, as well as the service quality level and user QoE
Chapter 20 Achieving QoE: Engineering Network Performance 473
Copyright 2004 Nortel Networks Essentials of Real-Time Networking
requirements. In general, QoS mechanisms can be classified into two
broad categories:
Nodal/distributed
End-to-end/centralized
Nodal QoS mechanisms typically operate solely at the nodal level; hence,
they perform QoS treatment based upon single-node characteristics,
without considering the status or characteristics of other nodes.
Typical nodal IP QoS mechanisms are shown in Figure 20-8. They include
traffic classification, bandwidth management, traffic conditioning, queue
management and scheduling.
Figure 20-8: Typical IP QoS mechanisms & operation
A typical network would include one or more of these functions, depending
on the required service level guarantees, traffic type and end-user QoE
requirements. DiffServ is a typical example of a nodal/per-hop behavior
architecture. Nodal QoS mechanisms are obviously less complex, as they
don't need to build up knowledge of an entire path. However, they have
performance and efficiency limitations in delivering hard QoE,² though
this might still be acceptable depending on the business model and SLA
targets.
Optimizing the wrong measures may achieve certain local objectives but
may have disastrous consequences on the emergent properties of the
network and thereby on the quality of service perceived by end-users of
network services.
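One of the nodal conditioning functions mentioned above, the meter/policer, is commonly realized as a token bucket; the sketch below drops non-conforming packets, though a real policer might re-mark them instead. The rate and bucket depth are illustrative.

```python
class TokenBucketPolicer:
    """Token-bucket meter: packets within the committed rate and
    burst conform; the rest are treated as excess."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate_Bps = rate_bps / 8      # refill rate, bytes/second
        self.burst = burst_bytes          # bucket depth
        self.tokens = burst_bytes
        self.last_s = 0.0

    def conforms(self, size_bytes: int, now_s: float) -> bool:
        # Refill tokens for the elapsed time, capped at the depth.
        self.tokens = min(self.burst,
                          self.tokens + (now_s - self.last_s) * self.rate_Bps)
        self.last_s = now_s
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True
        return False                      # drop (or re-mark) this packet

policer = TokenBucketPolicer(rate_bps=64_000, burst_bytes=1_600)
assert policer.conforms(1500, 0.00)       # fits in the initial burst
assert not policer.conforms(1500, 0.01)   # bucket not yet refilled
```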
The other class of QoS mechanisms, end-to-end/centralized QoS
mechanisms, operates on multiple nodes to provide QoS treatment on an
end-to-end basis. Obviously, end-to-end treatment is more complex but
2. Hard QoE means the service quality is guaranteed all the time, under any conditions and any load or
traffic pattern (an absolute limit).
[Figure 20-8 shows typical IP QoS mechanisms and their operation on a
node. Incoming flows pass through a classifier, which classifies and
places incoming datagrams in appropriate queues; bandwidth management
and traffic conditioning (meter/policer, shaper/dropper, marker), which
perform metering, policing, shaping and/or re-marking to ensure that the
traffic entering the domain conforms to the rules specified in the SLA;
queue management, which discards packets based upon application
requirements (RED/WRED, ECN); and a scheduler, which performs datagram
scheduling onto the outgoing flows using a specified queuing scheme
(FIFO, WFQ, PQ).]
may provide some performance advantage, especially better service level
guarantees. End-to-end QoS mechanisms would typically include voice
Call Admission Control methods (CAC), TCP flow-based admission
control and/or any centralized coordination methods to control traffic flows
admission and bandwidth reservation. RSVP would be an example of an
end-to-end signaling protocol for bandwidth reservation. However, RSVP
has not been widely deployed because of its scalability problems. The
premise of centralized/end-to-end QoS mechanisms is to ensure that
sufficient network resources are available to carry real-time-sensitive
traffic within its delay, loss and jitter bounds, to meet SLA targets, and
to protect the QoE guarantees made to already admitted flows. A
centralized QoS architecture is more capable of providing superior
service level guarantees than nodal QoS mechanisms.
In a typical centralized admission control architecture, a flow declares its
needs and constraints (delay, loss, jitter), as well as the characteristics
of the traffic it will send into the network. The centralized coordinator
node accepts or denies new traffic flows based on the availability of
network capacity; it may block a call (for example, with a busy signal) if
it cannot meet the flow's needs. These types of mechanisms are far more
complex, and some of them are still under development or in an early stage
of deployment. Call servers and media gateways now support some form of
CAC, whereby the MGs admit a limited number of calls on a given channel or
link size.
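The admit-or-block decision such a CAC makes can be sketched as a bandwidth budget check; the link size, per-call rate and voice share below are illustrative assumptions:

```python
def admit_call(active_calls: int, link_bps: float, per_call_bps: float,
               voice_share: float = 0.8) -> bool:
    """Admit a new call only while the total voice load stays inside
    the fraction of the link reserved for voice traffic."""
    voice_capacity = link_bps * voice_share
    return (active_calls + 1) * per_call_bps <= voice_capacity

# A 1.5 Mbps link carrying 80 kbps G.711 calls, 80% reserved for voice:
assert admit_call(10, 1_500_000, 80_000)        # the 11th call fits
assert not admit_call(15, 1_500_000, 80_000)    # the 16th is blocked
```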
So far, end-to-end QoS mechanisms have been principally used for voice
traffic; however, the trend moving forward is toward flow-based TCP
admission control for data traffic. The WLAN standard 802.11a/b offers a
centralized coordinator mode of operation called the Point Coordination
Function (PCF), whereby access to the medium is granted through a
polling scheme. The newly developed 802.11e, which is not yet ratified,
will implement a similar type of centralized coordination mode but will
also implement bandwidth reservation mechanisms. Figure 20-9 shows an
example where end-to-end QoS engineering could also be applied to
reduce the level of transcoding stage on an intercarrier voice call.
Figure 20-9: Inter-carrier transcoder-optimized end-to-end calls
[Figure 20-9 depicts an inter-carrier call path spanning CPE endpoints
over AMR/SMV VoIP (via MSC/MTX), G.711 TDM, G.726 VoIP and G.729a VoIP
segments, with gateways (WG, PVG, MP) coordinated by call/session servers
(CS/SS, MCS) over SIP/SIP-T; IP network connectivity between the gateways
allows an optimized call path that avoids intermediate transcoding
stages.]
Service level guarantees and QoS mechanisms selection
In the previous section, we saw that the selection of QoS mechanisms does
not necessarily lead to a system delivering consistent and continuous QoE
as operating conditions change. An aspect that also needs to be
considered as part of the QoS strategy is the service level guarantee,
which refers here to the consistency with which the system delivers the
desired QoE. For instance, if a given QoS mechanism only works under
moderate load during off-peak hours, then the service level guarantee
would be called soft. In contrast, if the QoS architecture is capable of
providing low delay, jitter and loss under extreme operating conditions,
the service level could be considered hard QoE.
mechanisms are capable of delivering hard QoE guarantees, while many
can provide soft QoE. Some backbone paths exhibit low delay and loss
and hence are very suitable for VoIP. However, there is no
consistency across all paths, and many backbone paths exhibit undesirable
characteristics (such as large delay spikes, periodic delay patterns and
outages) that may preclude hard QoE delivery unless adequate
provisioning and efficient traffic engineering are in place. The average
packet delay reported in all studies is predominantly determined by
speed-of-light propagation delay. However, jitter and loss rate are highly
dependent on the time of day, the provider and the path. The factors that
contribute to the variations in path characteristics are as follows:
Equal-cost Multipath (ECMP) and network routing
Packet size distribution
Link utilization & traffic load
Anomalous router behavior: routers performing other functions
(for example, routing table updates)
A North American Tier 1 telecom service provider has reported that 99.9%
of the time packets traverse their network coast-to-coast in less than 31 ms.
This means that propagation delay is predominant. However, about 0.1%
of the time, packets experience delays that are longer (sometimes much
longer) than this. So the question is: what service level guarantees
should we offer, and for what fraction of the time should the service
level be guaranteed? There are no standard guidelines for defining
service level guarantees. They are usually defined on a case-by-case
basis and depend on each customer's SLA. Table 20-2 highlights the QoS
mechanisms, along
with their capabilities in delivering different service level guarantees.
476 Section V: Network Design and Implementation
Essentials of Real-Time Networking Copyright 2004 Nortel Networks
Table 20-2: Service Level guarantees vs. QoS mechanisms
In the preceding table, a check mark (✓) indicates that a QoS mechanism is
required for a given service level guarantee. Overprovisioning is an
obvious alternative to QoS mechanisms and could still be a viable
solution, depending on the business model, cost, complexity and
efficiency constraints.
Traffic Engineering & Resources Allocation
The last step in our QoE engineering process is Traffic Engineering (TE)
and resource allocation. TE is a technique for mapping traffic demands
onto the network infrastructure to achieve specific performance
objectives, such as service quality, efficiency, cost and availability.
The different
elements of traffic engineering are highlighted in Figure 20-10 and will be
covered in this section.
[Table 20-2 maps traffic engineering/QoS mechanism strategies to the
service level guarantees (QoE levels) they support:
Nodal/distributed QoS mechanisms: traffic classification, traffic
buffering, traffic scheduling, rate limiting and policing, and Active
Queue Management (optional), applicable to both hard and soft QoE.
Centralized end-to-end QoS/TE mechanisms, required for hard QoE:
time-deterministic and time-slot channel allocation, bandwidth
reservation/dynamic provisioning, centralized admission control methods,
and a centralized coordinator node.
Over-provisioning: an alternative to the above, for any QoE level.]
Figure 20-10: Traffic engineering examples
Traffic demands
One of the first steps in traffic engineering is to identify traffic demands
and traffic sources in terms of distribution, characteristics and aggregate
volume. Traffic demands can be classified into at least two categories:
Constant Bit Rate (CBR) and Variable Bit Rate (VBR). Once traffic
demands have been established, network resources (buffer space, scheduler
share, link size) can be allocated. The following sections describe voice,
video and data traffic source demands.
CBR traffic sources
CBR traffic demands are primarily driven by the applications (for example,
voice, video, FAX, or modem), codec type, payload and protocol overhead.
For CBR traffic sources, the peak rate equals the mean rate. Therefore,
based upon these parameters, it is possible to calculate the amount of
capacity and/or link size to transport a finite number of sources or sessions.
In general, CBR sources are easier to provision as their individual and
aggregate characteristics are well defined and relatively easy to calculate.
The traffic demand examples for CBR sources for various codec and
payload types are shown in Table 20-3. The aggregate is calculated by
multiplying the single-source traffic volume by the expected number of
sources.
Table 20-3: CBR voice bandwidth requirements including protocol overhead
Example 4. In order to estimate the number of voice calls
supported on an OC-3 link, the voice call traffic demands need to be
estimated along with the effective link capacity. For G.711-20 ms
VoIP/AAL-5, the effective throughput requirement is 106 kb/s per
call (see Table 20-3). The effective OC-3 link capacity is calculated
by subtracting the SONET overhead and a 5% call
control/signaling allowance, resulting in 142 Mb/s. Therefore, the
maximum number of voice calls is 142 Mb/s divided by 106 kb/s =
1339. In comparison to a traditional TDM infrastructure capable of
2016 voice calls, the packet solution using a CBR codec is not as
effective in terms of call throughput due to the packet
infrastructure overhead.
Figure 20-11: Traffic demands calculation example
[Figure content: North American TDM rate structure: 24 DS0s mux into a
DS1, 28 DS1s into a DS3, and 3 DS3s into an OC-3, giving 24 x 28 x 3 =
2016 DS0s; 2016 x 64 kb/s = 129 Mb/s. Effective OC-3c packet capacity:
155 Mb/s minus SONET overhead minus 5% traffic overhead for signaling
and control = 155 - 6 - 7 = 142 Mb/s.]

Table 20-3 data:
                               VoATM       VoIP/ATM-AAL-5          VoIP/ppp
                              G.711/    G.711/IP over AAL-5   G.711/IP over ppp
                              AAL-1    10 ms  20 ms  30 ms   10 ms  20 ms  30 ms
                              (6 ms)
codec bit rate (kb/s)            64      64     64     64      64     64     64
speech frame length (ms)          6      10     20     30      10     20     30
voice payload (bytes)            47      80    160    240      80    160    240
voice packets per second        171     100     50     33     100     50     33
RTP header (bytes)                -      12     12     12      12     12     12
UDP header (bytes)                -       8      8      8       8      8      8
IP header (bytes)                 -      20     20     20      20     20     20
ATM AAL-5 CPCS header (bytes)     -       8      8      8       -      -      -
ppp (bytes)                       -       -      -      -       7      7      7
payload + protocol overhead      47     128    208    288     127    207    287
ATM AAL-1 SAR header (bytes)      1       -      -      -       -      -      -
ATM header (bytes)                5       5      5      5       -      -      -
cells (packets) per voice pkt     1       3      5      6       1      1      1
% overhead                      11%     50%    40%    25%     37%    23%    16%
cell/packet rate (per sec)      171     300    250    200     100     50     33
effective bit rate (kb/s)      72.5   127.2  106.0   84.8   101.6   82.8   76.5
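The per-call arithmetic behind Table 20-3 and Example 4 can be sketched as
follows. This is an illustrative calculation for the AAL-5 case only; the
function name is ours, and the fixed header sizes are those listed in Table
20-3.

```python
import math

# Effective line rate of one G.711 VoIP call carried over ATM AAL-5,
# using the header sizes from Table 20-3: RTP (12) + UDP (8) + IP (20)
# + AAL-5 CPCS trailer (8), segmented into 48-byte cell payloads, with
# each 53-byte cell carrying a 5-byte ATM header.
def voip_aal5_rate_kbps(payload_bytes: int, packets_per_sec: int) -> float:
    pdu = payload_bytes + 12 + 8 + 20 + 8
    cells = math.ceil(pdu / 48)
    return cells * 53 * 8 * packets_per_sec / 1000

# G.711 with 20 ms packets: 160-byte payload at 50 packets/s -> 106 kb/s,
# matching Table 20-3. Example 4 then divides the 142 Mb/s effective
# OC-3 capacity by this per-call rate to arrive at 1339 calls.
rate_kbps = voip_aal5_rate_kbps(160, 50)
max_calls = int(142_000_000 // (rate_kbps * 1000))
```

The 10 ms case (80-byte payload, 100 packets/s) reproduces the 127.2 kb/s
column the same way, since the cell count rounds up to three cells per packet.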
VBR traffic sources
In contrast to CBR sources, the variable rate of traffic from VBR sources
makes VBR more complex to provision. For example, a voice source with
silence suppression enabled may generate traffic ranging from a few kb/s
up to roughly 100 kb/s, while an MPEG-2 video source may vary over a
range of several Mb/s. Table 20-4 shows typical video codec traffic
demands. Other VBR traffic includes data sources, which are more bursty
than VBR voice and video. Traffic engineering for VBR sources cannot be
done using simple closed-form formulas, but instead requires more
sophisticated analytic equations or must be done empirically, using
simulation.
Table 20-4: Traffic profiles for video sources
Table 20-4 data:
Application        Resolution (typical)     Bit rate            Codec
Video conference   N/A                      384-768 kb/s        H.261, H.263, DivX
                                            CBR/VBR
Internet video     320 x 240                56-300 kb/s CBR     DivX, MPEG-4,
streaming                                                       RealVideo, Sorenson,
                                                                Windows Media
Broadcast TV       640 x 576 (PAL & SECAM)  3-5 Mb/s CBR/VBR    MPEG-2
                   640 x 480 (NTSC)         1.5-2.5 Mb/s        MPEG-4 AVC,
                                            CBR/VBR             SMPTE VC-1
Video on demand    528 x 480 / 480 x 480 /  3.75 Mb/s CBR       MPEG-2
                   352 x 480
DVD                720 x 480                3-9 Mb/s VBR        MPEG-2
HDTV               1280 x 720p              12-19 Mb/s VBR      MPEG-2
                   1920 x 1080i             6-9 Mb/s VBR/CBR    MPEG-4 AVC,
                                                                SMPTE VC-1

Voice traffic sources with silence suppression
Silence Suppression (SS) is one of the solutions used to reduce bandwidth
and increase call throughput in packet networks. It relies on a Voice
Activity Detection (VAD) mechanism for each speaker in a voice call.
When silence suppression is used, talkspurts and silence periods are
statistically distributed, resulting in a corresponding distribution of
instantaneous loading at the statistical multiplexer (buffer). The talkspurt
and silence period distributions together define the burstiness of the
incoming voice traffic source into the statistical multiplexer. When silence
suppression is used, voice traffic sources are modeled as VBR sources and
there is the probability that a sufficient number of these sources are active
simultaneously. This may cause the buffer to overflow with subsequent
packet loss. The large variation of traffic load from this bursty behavior
of silence-suppression-enabled sources requires careful capacity planning
and buffer size dimensioning. When silence suppression is used, the ratio
of peak/mean (burstiness) traffic demands varies depending on the number
of sources and the voice activity level. As the number of sources
increases, the peak/average ratio is reduced from 3:1 down to 1.5:1 with
64 sources, and further to 1.25:1 for 250 sources, using the traditional
talker model as shown in Figure 20-12. Below 24 users, the aggregate is
highly bursty and exceeds the capacity needed for CBR sources; therefore,
silence suppression is not an effective approach when the number of
sources is small (less than 24). As the number of sources increases, the
aggregate traffic demands become less bursty and more predictable, and
hence advantageous for silence suppression (VBR) sources.
Figure 20-12: Comparison of CBR and VBR voice traffic demand per call for
G.729/AAL-2 codec with/without silence suppression
[Figure content: comparison of AAL-2 voice traffic with and without
silence suppression (G.729, 10 ms, CU = 5 ms). The chart plots link
bandwidth per call (kb/s) against the number of sources (1, 4, 24, 32, 48,
64, 256), with columns for average link bandwidth and thin bars for peak
link bandwidth; per-call values of 42.1, 12.3 and 11.8 kb/s appear at 1, 4
and 24+ sources respectively. Annotations: from 1 to 24 sources, the use
of silence suppression creates a significant amount of bandwidth variation
due to the ON/OFF activity of voice sources; with 24 sources or more,
there is a significant bandwidth saving using silence suppression.]
The columns in Figure 20-12 indicate the average traffic, while the thin
bars indicate the peak bandwidth generated per voice call for the silence
suppression-enabled VBR sources. This graph highlights the fact that
when only a few silence suppression-enabled sources are multiplexed, the
peak traffic exceeds that of CBR sources; therefore, it is not an effective
solution. (The talker model assumes an average talkspurt of 0.352 sec and
a silence gap of 0.650 sec.)
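The aggregation effect above can be sketched with a simple binomial on/off
talker model. The choice of a 3-sigma Gaussian (CLT) tail for the "peak"
and the ~35% activity level (from the 0.352 s talkspurt / 0.650 s gap model
above) are modeling assumptions, not formulas given in the text.

```python
import math

# Binomial on/off talker model for N independent voice sources: each is
# "on" (in talkspurt) with probability p. Approximating the aggregate by
# a Gaussian (CLT), the peak-to-mean ratio at k standard deviations is:
def peak_to_mean(n_sources: int, p: float = 0.35, k: float = 3.0) -> float:
    mean = n_sources * p
    sigma = math.sqrt(n_sources * p * (1 - p))
    return (mean + k * sigma) / mean

# With p ~ 0.352 / (0.352 + 0.650), the ratio shrinks toward 1 as sources
# are aggregated, reproducing the ~1.5:1 (64 sources) and ~1.25:1
# (250 sources) figures quoted in the text.
```

This matches the qualitative point of Figure 20-12: burstiness per call
falls quickly once a few dozen sources are multiplexed.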
One of the most important elements affecting silence suppression-enabled
voice sources is the voice activity level. Bandwidth required to support
voice calls with silence suppression depends primarily on the voice activity
level; that is, the ratio of talkspurt/(talkspurt + silence) and the mix of voice
calls and voiceband data. The talkspurt/silence distribution types of the
several talker models reported in the literature do not exhibit significant
differences in terms of the voice traffic profiles generated at the
aggregate level, and thus are less critical than the effective voice
activity level. The voice activity level varies depending on the talker
model used
and the type of service. It generally varies from 35% to 55% for
conversational voice and up to 88% for scripted speech. No single talker
model fits all voice-based applications and services. The voice activity
level assumption is key to effective silence suppression engineering and
design, but it is not fully understood; further characterization is required.
At this time, the 55% activity level is a conservative value for engineering.
Figure 20-13 shows the maximum number of voiceband sessions [1] for
North American TDM and SONET-based capacity on an OC-3 link, based
upon the Central Limit Theorem (CLT). This engineering graph can serve
for capacity planning. A reference line has been set at 2016, which
indicates the number of voiceband sessions carried over a TDM
infrastructure; therefore, all the points above it indicate the operating
conditions where silence suppression would offer equivalent or superior
capacity.
Figure 20-13: Maximum number of voiceband sessions [1] for North America
TDM and SONET-based OC-3 capacity as a function of the
voice activity level
Data Traffic demands
Data traffic is another kind of VBR traffic (sometimes less predictable than
VBR voice). TCP flow control, operating at Layer 4, throttles traffic up
and down based upon implicit congestion notification. Despite numerous
attempts to define a typical user profile, it has been difficult to nail
down a suitable average profile. A set of typical individual user traffic
source models was created based on published papers and Nortel internal
studies. Data traffic flows were classified into three distinct groups:
long, short, and super-short, to represent typical real-time data user
applications,
such as HTTP/web browsing, telnet and gaming. Examples of data traffic
demands are shown in Table 20-5. The inter-arrival distribution implies that
not every user would be active at the same time, and includes time between
requests.
[1]: Voiceband sessions include voice calls and voiceband data.
142 Mb/s is derived from OC-3 minus SONET overhead minus 5% BW for call
control/signaling.
129 Mb/s is derived from 2016 channelized DS0s at 64 kb/s each.
[Figure content: maximum voiceband sessions (1400 to 2800) versus VAD
level (0.45 to 0.60) with increasing voiceband data contribution; curves
for 100%, 80%, 60% and 40% voice mixes at both the 142 Mb/s and 129 Mb/s
effective capacities, with a reference line at 2016 sessions.]
Table 20-5: Typical data application traffic demands based upon industry
published papers and internal Nortel research
Figure 20-14 shows the aggregate data user traffic demand requirement for
various QoE levels. As expected, the bandwidth requirement varies as a
function of the number of users as well as with the level of QoE provided.
This engineering table has been derived from network simulation based
upon a predetermined data user traffic profile (see Table 20-5) and, as one
of many typical traffic profiles, is provided as a guideline only. There is
no "one size fits all" traffic profile that would suit all Enterprise-type
businesses and customers, so it is expected that some characterization
would be required to derive accurate traffic demand matrices for a specific
user profile. In general, it was found that the average individual data
user traffic would vary from 20 kb/s up to 130 kb/s, depending on the
number of sources, aggregation level and QoE targets. A high level of
aggregation is highly beneficial due to statistical multiplexing. For
example, a ten-user network would require about 130 kb/s per user to
deliver optimal QoE, while a 2000-user network would only need 32 kb/s
per user.
Example 5. Determine the WAN access link size for an Enterprise
network with 100 users. A 100-user network would require 6.3
Mbits/s of link bandwidth to deliver optimal QoE, while it would
require about half (2.8 Mbits/s) to provide acceptable QoE.
Traffic Flow Type   Application          Transaction size & distribution       Inter-arrival distribution
Long flows          Web browsing,        main object size:                     mean: 32 sec
                    e-commerce           mean: 10 Kbytes, stdev: 25 Kbytes     stdev: 92 sec
                                         distribution: Lognormal               distribution: Weibull
                                         inline object size:
                                         mean: 7.7 Kbytes, stdev: 125 Kbytes
                                         distribution: Lognormal
                                         number of inline objects:
                                         min: 1, max: 10, mean: 5
                                         distribution: uniform
Short flows         Telnet, login        mean: 40 bytes, stdev: 20 bytes       mean: 300 sec, stdev: 150 sec
                                         distribution: Normal                  distribution: Weibull
S-short flows       Gaming               mean: 80 bytes                        mean: 0.06 sec
                                         distribution: Extreme (a=80, b=5.7)   distribution: Deterministic
Figure 20-14: Data user traffic demand requirements for various QoE level
Figure 20-14 is based upon Table 20-5 user traffic profiles, including TCP
short and long flows. Long flow traffic represents 80% of all traffic
volume. Heavy Peer-to-Peer (P2P) traffic will most likely skew these
engineering rules, and work is ongoing to study the impact of P2P on
provisioning and QoE.
Network Impairments Budgeting: Delay/Jitter and Distortion
In order to meet the desired QoE targets, an impairment budget planning
exercise needs to be performed in order to determine the available margins.
For each level of voice service quality, the margin for allocation is
determined by comparing the proposed packet infrastructure with a
designated benchmark (PSTN or wireless). Impairments can be classified
into two distinct classes: delay/response time and distortion, where
distortion would include packet loss, echo and speech compression.
Delay budgeting
The delay margin for voice can be derived by either using the E-model 3R
delta rule or by using a hard limit maximum one-way delay as a threshold
point that meets the desired QoE. The absolute delay threshold technique
would most likely be appropriate for UDP-based data services. It should be
pointed out that the delay budget should be set from an end-to-end
perspective; that is, no single element gets to use it all. From a
pragmatic perspective, that means the impairment budget should be
allocated across all of the elements of a connection (see the next
example).
Example 6. The determination of the delay margin using the 3R rule
for a POTS-to-POTS call going through a packet core is shown in
Figure 20-15. Delay impairment sources include PSTN switching
offices (End Office and Tandem Office) plus propagation delay, the
circuit-to-packet media gateway (PVG) for voice encoding and
decoding, and IP core router switching and queuing. The delay
margin is computed by finding the intersection point on the E-model
where the 3R line crosses the reference connection. The delay/jitter
[Figure content: link bandwidth requirements (Mbits/s) versus number of
users (log scale, 10 to 10,000) for optimal, acceptable and lower QoE
levels. A network with 100 users requires about 6.3 Mbits/s to deliver
optimal application QoE; the other QoE levels can be satisfied with lower
bandwidth.]

Total link bandwidth requirement (Mbits/s):
Number of Users   Optimal QoE   Acceptable QoE   Lower QoE
10                1.3           0.5              0.3
50                3.6           1.6              1.2
100               6.3           2.8              2.0
200               9.5           5.2              4.0
500               20.0          13.0             10.0
1000              35.0          25.0             20.0
2000              64.0          50.0             40.0
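The engineering table above lends itself to a simple lookup; linearly
interpolating between the published rows for intermediate user counts is an
assumption of this sketch, not a rule stated in the text.

```python
# Required link bandwidth (Mbits/s) for optimal QoE, from the Figure 20-14
# engineering table; each row is (number of users, bandwidth).
OPTIMAL_QOE = [(10, 1.3), (50, 3.6), (100, 6.3), (200, 9.5),
               (500, 20.0), (1000, 35.0), (2000, 64.0)]

def link_size_mbps(users, table=OPTIMAL_QOE):
    # Linearly interpolate between the two bracketing table rows.
    for (u0, bw0), (u1, bw1) in zip(table, table[1:]):
        if u0 <= users <= u1:
            return bw0 + (bw1 - bw0) * (users - u0) / (u1 - u0)
    raise ValueError("user count outside table range")
```

For 100 users this reproduces the 6.3 Mbits/s of Example 5; for user counts
between rows it returns a value between the neighboring entries.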
margin of 95 ms is derived by subtracting the sum of intrinsic fixed
delays (propagation, IP media gateway, and processing), 72 ms, from the
maximum allowable delay for a 3R degradation, 167 ms.
Figure 20-15: Delay source components of POTS-to-POTS call going
through a packet core
Figure 20-16: Delay impairment with E-model PSTN reference and
replacement packet solution R target
Note in Figure 20-16 that the 3R represents the voice target for equivalent
QoE performance to the reference PSTN.
A second method for delay impairment budgeting is to use an absolute
delay target limit. For conversational voice services, ITU-T G.114
suggests an upper limit of 150 ms of one-way delay for wireline calls.
This 150 ms is a safe target that can be used for some wireline calls
(POTS-POTS) and
short wireless calls (POTS-Mobile). When propagation delay approaches
150 ms, such as on very long international calls or mobile-to-mobile calls,
then it might not be possible to use this target but still offer PSTN voice
quality services. It should be noted that for those ultra-long international
[Figure 20-15 content: a PSTN reference connection (POTS access, End
Offices and Tandem Offices over a TDM core) compared with its
replacement, where circuit-to-packet media gateways (PVGs) hand off
PRI/IP traffic across an IP/MPLS transport core. Packet core distance =
2000 km; total call connection distance = 8000 km; TDM core assumed at
1 hop per 1000 km, packet core at 1 hop per 250 km.]
[Figure 20-16 content: E-model R-factor versus average one-way delay for
G.711/10 ms with 0% packet loss. PSTN reference point: R = 92.8 at T =
52 ms; POTS-to-POTS performance: 94.3; target with maximum jitter (a
-3R degradation): R = 89.8 at T = 167 ms; jitter margin: 95 ms.]
connections (where propagation approaches 150 ms) or mobile-to-mobile
calls, the user's expectation is not the same as for a local call, so the
theoretical limit of 150 ms is neither a realistic nor an achievable
target, and the 3R rule is more appropriate. The 150 ms maximum delay
should only be used under specific conditions. The delay in any network
should be such that it achieves the QoE target requirements, or is similar
to that of the network it is replacing (PSTN or wireless). In every
network, there are intrinsic delays that cannot be changed or removed,
which include propagation, transmission, equipment switching, and the use
of mobile access. In addition, there are other delay contributions that
are controllable and can be optimized to achieve a desired level of
performance. These controllable parameters include the voice compression
codec, packet size, network loading, and jitter buffer size. The network
loading essentially drives jitter and packet loss.
Example 7. Figure 20-17 shows how to determine the delay
impairment margin using an absolute delay threshold for a VoDSL
voice call. First, sum up the fixed delays of all major HRX
elements: the result is 102 ms. Second, the fixed delays are
subtracted from the 150 ms delay target, resulting in an available
48 ms jitter margin. In networks where multiple packet domains
exist, the available margin should be apportioned among them
fairly; that is, the entire 48 ms is not allocated to the first DSL
provider. A simple approach is to split the jitter margin equally
among the various packet domains; for example, split it between
DSL access A and access B. Note that jitter is not a simple
additive process, but rather a complex convolution. However, the
additive method remains a fail-safe, as the result is very
conservative. Simulation is required to determine, with accuracy,
the jitter induced on a network of queues or statistical
multiplexers.
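The budgeting in Example 7 reduces to simple arithmetic; the equal split
across packet domains is the book's conservative fail-safe, since jitter
budgets do not strictly add.

```python
# Absolute-delay budgeting for the VoDSL call of Example 7 / Figure 20-17.
TARGET_MS = 150        # ITU-T G.114 one-way delay limit (incl. propagation)
FIXED_DELAY_MS = 102   # sum of the fixed delays across the major HRX elements

jitter_margin_ms = TARGET_MS - FIXED_DELAY_MS    # 48 ms available
domains = 2                                      # DSL access A and DSL access B
per_domain_ms = jitter_margin_ms / domains       # 24 ms each (equal split)
```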
Figure 20-17: Determination of delay margin using a fixed delay threshold
[Figure content: end-to-end VoDSL reference path: client and home MG +
DSL modem, DSLAM, ATM switch and BRAS in provider A's DSL access;
an LER-to-LER regional/core network; then BRAS, ATM switch, DSLAM
and home MG in provider B's DSL access. Fixed delays (D, in ms) per
element sum to a total end-to-end delay (T) of 102 ms against the
recommended end-to-end delay of 150 ms, including propagation. Note:
the home MG delay includes the DSL modem frame-interleaving correction
delay plus voice encoding/decoding.]
Distortion budgeting
Conversational voice services distortion budgeting includes packet loss,
echo control and codec distortion. For a packet network replacing a
traditional TDM infrastructure, a 3R distortion margin is recommended to
produce unnoticeable degradation. Other margins can also be used,
depending on the business model and service quality expectations.
Although the 3R margin produces equivalent quality, the margin allocation
is very small and offers very limited options and flexibility for the
controllable parameters. The codec should be G.711 only, as all other
codecs have an equipment impairment greater than 3R. Transcoding should
be eliminated as a call traverses multiple service providers' networks
and/or packet islands. Packet-to-packet handoff is required to prevent
additional voice decoding/encoding stages. If TDM handoff is used between
packet islands, the impairment budget is fractioned (see Figure 20-18).
Figure 20-18: Impact of TDM handoff
Concatenated packet networks with TDM handoff replicate packetization,
jitter and codec impairments (transcoding or tandemed coding),
fractioning the available budget for allocation.
Also, to meet our 3R distortion margin, very low packet loss is required
along with best practice echo control. Packet loss has a significant impact
on voice quality, even where Packet Loss Concealment (PLC) is deployed.
Many packet loss models assume a random distribution of packet loss. In
real networks, packet loss is intimately tied to jitter, both of which are
governed by congestion, and jitter is bursty; consequently, losses occur
in bursts and voice output is muted (no voice playback). Therefore, it is
recommended to have negligible packet loss, 10^-3 or less, to achieve
voice quality equivalent to the PSTN. In order to achieve this target,
proper congestion/QoS mechanisms need to be in place. For complete
backward compatibility with voiceband services such as FAX and modem,
stricter packet loss control is required since these applications are more
susceptible to loss. The packet loss target for voiceband services may
range from 10^-3 to 10^-6.
Packet loss is also a distortion element for TCP-based data applications.
TCP timeout and retransmission delays will extend the total transaction
time as TCP packets experience losses. Packet loss rate has been
identified as a key contributing factor to QoE; therefore, it should be
controlled within specific limits to achieve the desired QoE performance.
Table 20-6 shows the TCP-based data application packet loss requirements
for various QoE levels. For example, in order to achieve an optimal QoE
of less than 200 ms response time for interactive applications, a packet
loss rate of less than 0.1% is required. This loss rate could be relaxed
to 1% if a longer response time of 400 ms could be tolerated.
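The loss/response-time relationship above can be illustrated with the
well-known Mathis et al. approximation for steady-state TCP throughput.
This formula is a standard result from the literature, not one given in
this chapter, and the MSS and RTT values below are arbitrary illustrative
assumptions.

```python
import math

# Mathis approximation: achievable TCP rate ~= (MSS/RTT) * sqrt(3/2) / sqrt(p),
# where p is the packet loss rate. Throughput falls with sqrt(p), which is
# why the stricter 0.1% loss target yields faster response times than 1%.
def tcp_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    return (mss_bytes * 8 / rtt_s) * math.sqrt(1.5) / math.sqrt(loss_rate)

# Cutting loss tenfold (1% -> 0.1%) improves achievable throughput by
# sqrt(10), about 3.2x, for the same MSS and RTT.
gain = tcp_throughput_bps(1460, 0.05, 0.001) / tcp_throughput_bps(1460, 0.05, 0.01)
```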
Table 20-6: TCP-based data application packet loss requirements for
various QoE levels

Data Application type        QoE target          Preferred   Acceptable
Interactive                  Response time [1]   200 ms      400 ms
(gaming, telnet...)          Loss rate [2]       <0.1%       1.0%
Responsive                   Response time [1]   2 seconds   4 seconds
(e-commerce, web, http...)   Loss rate [2]       <0.1%       0.7%

[1]: Response time targets are derived from ITU-T G.1010 as well as from
subjective studies conducted at Nortel.
[2]: Recommended packet loss targets are based on modeling and simulation
studies conducted at Nortel.

Network Resources: buffer, scheduler share & bandwidth
This section describes some of the network resource controls that can be
optimized in typical telecommunication/networking equipment such as
routers, switches and media gateways. Engineering guidelines are also
provided to facilitate configuration.
Buffer dimensioning for UDP-based voice/data services
Due to the bursty nature of traffic on packet networks, buffering
mechanisms are required to absorb temporary congestion and/or traffic
bursts as packets are multiplexed, switched or routed. Packet queuing
arises due to contention for the scheduler bandwidth share, both by
multiple voice calls sharing the same queuing priority (intraclass), and
between voice and data packets where a data packet has already started
transmission (interclass). Even when a strict priority scheduler is used,
buffering is required. The buffer allocated by routers to voice traffic
should be such that the maximum queuing delay is within the available
margin (as discussed in budget allocation). On high-speed links (DS3 and
above), provisioning a buffer of approximately 2 ms in depth would
guarantee a
loss ratio target of 10^-6 or less. Another rule for voice buffer
provisioning is to use 1/10 to 1/20 of a packet buffer per G.711 10 ms
voice call (see the example below).
Example 8. Calculate the buffer size requirement for a scheduler
share of 20% on an OC-3 interface while maintaining a maximum
queuing delay of 2 ms.
Buffer size = max delay x scheduler share x link rate / 8 bits per
byte = 2 ms x 20% x 155 Mbits/s / 8 = 7750 bytes.
Alternatively, the buffer size can be estimated using the one-tenth
rule. If a G.711-20 ms voice call requires roughly 100 kbits/s,
about 300 voice calls can be supported out of a scheduler
provisioned for 31 Mbits/s (20% of an OC-3).
Buffer size ~ 1/10 x 300 voice calls ~ 30 voice packet buffers.
Therefore, approximately 30 voice packet buffers will be required
to prevent packet loss on a strict priority scheduler.
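The first method in Example 8 can be written as a one-line sizing function;
this is just the delay-bandwidth arithmetic from the example, with our own
function name, not vendor guidance.

```python
# Buffer (in bytes) that bounds queuing delay to max_delay_s at a scheduler
# serving a given share of the interface rate (Example 8's first method).
def voice_buffer_bytes(max_delay_s: float, share: float, link_bps: float) -> float:
    return max_delay_s * share * link_bps / 8

# 2 ms at a 20% share of an OC-3 (155 Mbits/s) works out to 7750 bytes.
buffer_bytes = voice_buffer_bytes(0.002, 0.20, 155e6)
```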
Figure 20-19: Voice buffer provisioning versus offered load and drop
probability
[Figure content: drop probability (M/M/1/K queueing discipline) versus
offered load (0 to 1) for buffer sizes of 0.1 ms, 1 ms and 2 ms.]
In a well-engineered network, where load is balanced and does not exceed
the provisioned rate, buffers will be lightly utilized, so queuing delays
will be close to 0. Note that the recommended maximum transmit queue
delay/size is 2 ms.
Buffer dimensioning for TCP based data services
Buffer dimensioning should not be based upon packet queuing delay only,
but rather by data application QoE requirements and their contributing
factors. Data users do not care about packet delay, but care about
application performance and the relationship between the two is not linear.
Applications relying on TCP as transport protocol will not necessarily offer
faster response time with smaller buffer. The optimal buffer size for TCP/
IP-based applications is a complex function and is not based upon a single
element delay, but instead a function of the number of active flows, link
speed and loss rate. For data services, the buffer size would typically be
engineered to keep TCP flow loss at 1% or less (0.1% preferred) in order
to minimize TCP timeouts and retransmissions. There is a trade-off between
queuing delay and packet loss. A small queue size (small buffer) implies
low average queuing delay; however, a small queue size does not always
lead to faster TCP application response time, because TCP timeouts caused
by insufficient network buffering can actually increase response time.
Therefore, the transport protocol's characteristics and their impact on
QoE need to be understood before determining the queue size.
Figure 20-20: Buffer size relationship to QoE for TCP-based data
applications
[Figure content: data QoE response time versus total buffering level;
heavy loss (under-buffering) at one extreme and excessive
buffering/queuing delay at the other, with an optimal operating point in
between.]
Scheduler share
The queuing delay introduced by a scheduler is greatly influenced by the
offered load and the output link capacity. Queuing theory shows that as the
scheduler load increases, the queuing probability increases, and above a
given threshold increases in an exponential fashion and becomes infinite as
it reaches 100% link occupancy (see Figure 20-21).
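The exponential growth in queuing delay with load can be illustrated with
the textbook M/M/1 mean-wait formula; this is a standard queueing-theory
result used here for illustration, not a formula given in this chapter.

```python
# Mean waiting time in an M/M/1 queue: W_q = rho / (1 - rho) * service time,
# where rho is the utilization (offered load / link capacity).
def mm1_wait_ms(rho: float, service_time_ms: float) -> float:
    if not 0 <= rho < 1:
        raise ValueError("utilization must be in [0, 1)")
    return rho / (1 - rho) * service_time_ms

# Delay is modest and stable at 50% load but inflates sharply toward 100%:
# at 90% load the mean wait is 9x the service time; at 99% it is ~99x.
```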
Figure 20-21: Jitter versus loading for various link sizes: jitter =
f(buffer size, link speed & loading)
[Figure content: average source jitter (ms, log scale from 0.01 to 1000)
versus voice loading (% of link capacity, 0% to 100%) for link speeds from
ISDN 128 kb/s through 256 kb/s, 512 kb/s, 1 Mb/s, T1, 10 Mb/s, DS3,
100 Mb/s, OC-3 and OC-12; voice + data traffic through a two-queue
strict-priority statistical multiplexer, 20 ms G.711 voice packets (200
bytes), 1500-byte data packets.]
Figure 20-21 shows that in a well-engineered network where load is
controlled within a given range (typically less than 90%), queuing delay
and jitter are bounded and stable. Where loading is not controlled and link
loading approaches 100%, queuing delay increases exponentially (in
practice, this is limited by the buffer size). Note also that lower speed links
have lower maximum practical operating points; that is, points above
which jitter becomes greatly inflated compared to its value in the unloaded
network.
As the link size diminishes, the maximum operating loading point
diminishes before saturation.
Routers should not allocate more than 95% of the interface rate on
high-speed interfaces (10 Mbits/s and above) to prevent queue buildup and
to control packet loss/jitter. For lower-speed links, the maximum loading
threshold should be reduced, as shown in Figure 20-21.
Bandwidth provisioning
Bandwidth provisioning is the part of the resource allocation process by
which a certain amount of link resources is allocated to traffic demands
to ensure acceptable QoE is delivered to the end user; essentially, it
maps traffic flows onto network links. Network traffic is
dynamic in nature due to a variety of factors, such as the number of users
varying with time of the day, bursts in traffic due to a failure and so on. An
important network management issue is how to maintain QoE requirements
in this dynamic environment. The approach shown in Table 20-7 highlights
the key steps of bandwidth provisioning for both a native IP and MPLS
implementation. The main difference between the native IP and MPLS
architectures is in the process of mapping traffic demands. In native IP,
mapping is accomplished by optimizing routing-table routes (through IGP
metrics, for example OSPF), while in MPLS, mapping is accomplished through LSPs.
MPLS provides flexibility in mapping bandwidth demand on network
resources.
Table 20-7: Bandwidth provisioning steps for native IP and MPLS networks
Bandwidth provisioning can be done either statically or dynamically.
Static provisioning allocates bandwidth for
the highest load over a time window and implements efficient QoS
mechanisms to ensure that sufficient bandwidth is available for the priority
traffic within predetermined constraints. The drawback of this approach is
that the capacity may be highly under-utilized whenever the load is significantly
below the peak load within the time window. The other approach, dynamic
bandwidth provisioning, solves this problem by using network
resources more efficiently, but it is far more complex and
requires centralized coordination intelligence to maintain knowledge
of link capacity availability. Dynamic bandwidth provisioning is currently
under development and is expected to replace traditional static provisioning
as a long-term solution.
Bandwidth provisioning should also include some extra bandwidth for
redundancy-path restoration, call control, and future traffic growth.
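The static approach can be sketched in a few lines: provision for the peak load over the measurement window plus headroom. The 25% headroom figure and the hourly load numbers below are illustrative assumptions, not values from the text.

```python
def static_allocation(load_samples_mbps, headroom=0.25):
    """Statically provision a link for the peak load seen over the
    window, plus headroom for restoration paths, call control,
    and future growth (headroom value is illustrative)."""
    return max(load_samples_mbps) * (1 + headroom)

# Hourly offered load over one day, in Mbit/s (illustrative numbers):
day = [12, 10, 9, 9, 11, 20, 35, 48, 52, 50, 47, 45,
       44, 46, 49, 51, 40, 30, 25, 22, 18, 16, 14, 13]
link_mbps = static_allocation(day)            # provisioned for the 52 Mbit/s peak
mean_util = sum(day) / len(day) / link_mbps   # the drawback: often well below peak
print(f"provision {link_mbps:.1f} Mbit/s, mean utilization {mean_util:.0%}")
```

The low mean utilization in this sketch is exactly the under-utilization drawback noted above; dynamic provisioning trades that waste for coordination complexity.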
Native IP
1. Identify bandwidth demand between edge routers (voice and signaling).
2. Map bandwidth demand onto the network:
   - Select IGP weights (defaults such as hops or distance, or weights optimized for efficient bandwidth utilization).
   - Predict the utilization of every link, for every end-to-end traffic matrix, and simulate all possible failures.
3. If the prediction in step 2 identifies links utilized more than 95%:
   - Increase the bandwidth of those links and finish engineering, or
   - Add bandwidth where feasible and/or economical and repeat step 2.

MPLS
1. Identify bandwidth demand for each LSP. This depends on the selected LSP topology and CAC. For edge-to-edge LSPs, identify bandwidth demand between the edges and include possible traffic fluctuations.
2. Map LSPs onto the network: calculate the route for each LSP [1] (complying with constraints on router resources) and calculate back-up LSPs.
3. If routes for some LSPs cannot be found due to constraints on router resources:
   - Split LSPs,
   - Add more bandwidth to links where constraints would be violated and finish engineering, or
   - Add bandwidth where feasible and/or economical and reroute enough LSPs to make room.
Chapter 20 Achieving QoE: Engineering Network Performance 493
Copyright 2004 Nortel Networks Essentials of Real-Time Networking
Call control and signaling, in a typical network, represents about 1% of the
total traffic. As a safe margin, planning for 5%-10% is suggested.
Jitter buffer
The main purpose of a jitter buffer is to compensate for packet delay
variation, which would otherwise affect the playout of a deterministic packet
sequence such as real-time voice or video. The jitter buffer must be designed for an
expected traffic profile. That is, to dimension the jitter buffer, the packet
interarrival delay must be known or be within predetermined bounds for
the jitter buffer depth to be adjusted within an expected range. The jitter
buffer wait time can be statically provisioned or adjusted dynamically as a
function of varying network operating conditions. To prevent packet loss
from jitter buffer overflow, the persistence of instantaneous arrival and the
average arrival rate must not exceed the available jitter buffer storage
space. Obviously, the instantaneous settings of an adaptive jitter buffer
would be a function of the behavior of the network. To minimize jitter
buffer wait time, the network should be engineered to minimize or
eliminate jitter by ensuring that the average arrival rate does not exceed a
certain percentage (utilization/loading) of the outgoing link speed of each
multiplexing stage in the connection. For voice traffic with an assumed
uniform periodic profile at point of origin and a constant bandwidth
requirement, the percentage can be as high as 90%-95%, provided the
traffic is all voice. If shared with data, voice has absolute priority over data.
Under those circumstances, where voice packets originate, traverse, and
terminate over links in excess of 10 Mb/s, one can expect induced jitter
(resultant from the convolution of the behavior of the concatenated
multiplexing stages) to be a few milliseconds at most.
Consequently, a 10 ms jitter buffer should be sufficient.
Jitter buffers can and should be sized independently of packet size.
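The overflow/late-arrival relationship described above can be sketched as follows: given a set of per-packet one-way delays, a fixed jitter buffer discards as late any packet whose delay exceeds the minimum observed delay plus the buffer's wait time. The delay values and function name are illustrative.

```python
def late_loss_fraction(delays_ms, buffer_ms):
    """Fraction of packets discarded as late by a fixed jitter buffer.

    A packet is late if its one-way delay exceeds the minimum observed
    delay (the packet that sets the playout timeline) plus the
    buffer's wait time.
    """
    base = min(delays_ms)
    late = sum(1 for d in delays_ms if d > base + buffer_ms)
    return late / len(delays_ms)

# Illustrative one-way delays (ms) for ten consecutive voice packets:
delays = [30, 31, 33, 30, 38, 30, 45, 31, 30, 52]
print(late_loss_fraction(delays, buffer_ms=10))   # the 45 ms and 52 ms packets are late
```

With a well-engineered network where induced jitter is a few milliseconds at most, a 10 ms buffer as recommended above yields zero late loss; the delay spikes in this illustrative trace are what an unmanaged network can produce.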
Where those circumstances do not prevail (that is, network loading is not
controlled or bounded), there is little that can be said about determining the
correct jitter buffer settings. Without any firm expectation of the
instantaneous and average traffic profile (that is, knowledge of the total
traffic admitted to the network and its load balance across the network),
the probability of unbounded persistence and uncontrolled average
loading is not only increased but itself unbounded. No jitter buffer in any
router or receiving media gateway can be considered big enough. One
should always engineer a managed network for well-behaved normal
operation, with a sufficiency of controls and monitors, and a capacity
suited to demand. Jitter buffer wait time of a few milliseconds will have no
significant impact on voice quality, and fully absorbs the packet delay
variation of packet switching.
Delay distribution across an IP backbone
Figure 20-22 shows an Internet trace capture of an unmanaged IP
backbone. An unmanaged IP network operates typically in a best effort
mode with no QoS mechanisms or congestion management and thus no
performance guarantee. Studies by Nortel and others have concluded that
unmanaged IP networks can exhibit widely variable behavior. Delay
across an unmanaged IP network can range from a few milliseconds during
uncongested periods to hundreds of milliseconds during congestion peaks.
Under normal conditions, you might see delay across this path of about 50
ms with about 10 ms jitter. Unstable periods and jitter bursts up to 300 ms
most likely result from excessive loading on some links. Impairment to
QoE (quality of experience) on voice calls across such a network would
result in unsatisfactory service, even where jitter buffers are capable of
compensating for highly variable jitter.
In contrast, a managed IP network can offer service level guarantees
because implementation of congestion control and efficient QoS
mechanisms keep delay, loss, and jitter within acceptable bounds. This
consistent behavior delivers the desired voice QoE. In a typical managed IP
network, the traffic load at each node and/or switch/router is controlled
within specific limits to prevent the queue build-up that leads to queuing
delay, packet loss, and jitter. A managed IP network controls or eliminates
congestion through careful selection of QoS mechanisms and end-to-end
traffic engineering.
Figure 20-22: Delay distribution across an IP backbone
The graph in Figure 20-22 indicates that under normal conditions, the delay
across this path is about 50 ms with approximately 10 ms jitter. This
network experiences some unstable periods where jitter increases up to 300
ms.
The effectiveness of an adaptive jitter buffer and packet loss concealment is
limited by the magnitude of network-induced impairments. Based on
simulation and measurement results, limiting factors include peak delays
that exceed real-time voice delay targets (150-250 ms), as well as the
highly bursty nature of packet loss distribution.
If an adaptive jitter buffer wait time is deployed, it must be able to adapt to
the wide range of jitter distributions that are typical in today's IP networks.
Adaptation schemes may not perform well with all distributions of jitter.
Tuning of the adaptation algorithm may be necessary to match the delay
variation characteristic of a network. Tuning can be done by adjusting the
weighting used for the calculation of the moving averages and the
thresholds (sensitivity) to the occurrence of spikes in the delay variation. A
single setting for these parameters that works for all traces may not be
feasible. Note that the long-term average packet loss rate and jitter are in
many cases misleading, as they hide transient events that are only visible
on short time scales. Packet loss periods and delays span several orders of
magnitude: distributions of loss and delay bursts have heavy tails. Where
more than 40-60 ms of speech are lost, there is no longer sufficient
information to reconstruct the speech. This places a hard limit on the
effectiveness of packet loss concealment techniques.
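The weighted-moving-average tuning described above can be sketched in code. This is a hypothetical adaptation scheme in the spirit of RFC 3550's exponentially weighted jitter estimate, not the algorithm of any specific product; `alpha` (the averaging weight), `spike_factor` (the sensitivity to delay spikes), and `safety` are exactly the tuning knobs the text mentions, with illustrative values.

```python
class AdaptivePlayout:
    """EWMA-based playout delay estimator with spike detection.

    All constants are illustrative tuning parameters, not values
    from the text: alpha weights the moving averages, spike_factor
    sets the threshold for treating a delay change as a spike, and
    safety scales the jitter margin added to the playout delay.
    """

    def __init__(self, alpha=1 / 16, spike_factor=4.0, safety=3.0):
        self.alpha = alpha
        self.spike_factor = spike_factor
        self.safety = safety
        self.mean = None      # smoothed one-way delay (ms)
        self.jitter = 0.0     # smoothed delay variation (ms)

    def update(self, delay_ms):
        """Fold in one packet's observed delay; return new playout delay."""
        if self.mean is None:
            self.mean = delay_ms
            return self.playout_delay()
        dev = abs(delay_ms - self.mean)
        # 1.0 ms floor avoids a zero spike threshold at startup.
        if dev > self.spike_factor * max(self.jitter, 1.0):
            # Delay spike: track it immediately rather than averaging it in slowly.
            self.mean = delay_ms
        else:
            self.mean += self.alpha * (delay_ms - self.mean)
            self.jitter += self.alpha * (dev - self.jitter)
        return self.playout_delay()

    def playout_delay(self):
        return self.mean + self.safety * self.jitter
```

A single set of constants may not suit all jitter distributions, which is the point made above: the weights and spike threshold must be tuned to the delay variation characteristic of the network.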
Summary: Network Engineering Guidelines
To maximize end-to-end QoE performance, it is desirable to have traffic
management and QoS mechanisms that control the QoE contributing factors
of real-time applications, and to make those QoS mechanisms QoE-aware.
Some of the contributing factors highlighted in this chapter require end-to-
end control, which differs from the nodal-only control performed by
mechanisms such as schedulers or active queue management. Nodal
scheduling mechanisms alone are not sufficient to prevent congestion buildup
and degradation of application QoE.
Figure 20-23: QoE QoS Summary
Additional traffic engineering end-to-end mechanisms are required, such as
end-to-end flow admission control as well as efficient active queue/buffer
management techniques or over-provisioning. It is expected that once
efficient traffic engineering is in place, the requirements for QoS
mechanisms can be greatly reduced.
[Figure 20-23 content, progressing from soft to hard QoE: nodal QoS mechanisms (scheduling; classification; rate limiting and policing; flow admission control); QoE contributing factor optimization (packet loss rate; buffer size; link speed; network delays/RTT); and centralized/end-to-end QoS/TM mechanisms (call/flow admission control for voice calls and TCP sessions; bandwidth reservation; centralized coordinator).]
QoS mechanisms should be engineered and designed to address the
fundamental real-time related issues and most critical QoE contributing
factors as identified in this chapter. In order to deliver hard QoE, more
advanced mechanisms that may not be commercially available today
(such as flow and/or call admission control) would be required. Figure 20-
23 summarizes the traffic management/QoS mechanism requirements for
the various levels of service quality guarantee. Nodal QoS mechanisms alone
are insufficient to provide hard QoE; efficient end-to-end/centralized QoS
mechanisms are also required.
What you should have learned
Voice and call control traffic must be given higher scheduling priority
than other services. Shorter packetization delay (frame length) is better.
Packet loss must be randomly distributed and negligible. All handoffs must
be packet based. TDM handoff between multiple packet domains incurs a
distortion penalty.
Submarine links might be problematic, as they introduce compression
codecs and could lead to transcoding. Wireless-to-wireless international
calls should use Transcoder Free Operation or interwork between wireless
standards directly. Codec selection and speech distortion should be
controlled to ensure no more than 3 R between the reference and the equivalent-
quality replacement solution. Compression incurs a distortion penalty. If
compression is required on international submarine links, then it should be
invoked once and used end-to-end (that is, no tandeming; G.726-32 with 10
ms is accredited).
The echo control method should account for connection delay and loss/
level plan. The jitter buffer should be designed for an expected traffic
profile and should be engineered independently of the packet size.
Congestion at each node must be controlled to ensure statistical packet
loading is below 95%. If routing is based on default IGP metrics,
bandwidth must be added to links that could ever be more than 95%
occupied. Routers should not allocate more than 95% of their interface rate to
voice and call control traffic. Link weights or route tables might be
optimized for more efficient use of network bandwidth. LSP bandwidth
engineering should be optimized based on the codec, the packetization,
and whether CAC is available.
Selection of QoS mechanisms should consider their impact on
QoE, along with the service level guarantees (soft vs. hard QoE). Router
buffers should be engineered with 2 ms of buffer depth to ensure a
packet loss rate of less than 1 × 10⁻⁶. Jitter must be controlled and kept within
the end-to-end delay budget; 10 ms is recommended.
Section VI
Examples
The material presented in previous chapters has been mostly theoretical or
hypothetical. In this section, we offer some concrete examples of network
architectures and configurations. Chapter 21 provides a look at a specific
large Enterprise network; namely, Nortel's own corporate network, which
we use as a proving ground for our equipment. Chapters 22 and 23 provide
a contrast between the perspectives of the Public Carrier and Private
Enterprise view of real-time networking. Chapter 24 describes the
implementation of Real-Time Control applications in the video space.
While Enterprise networks and Carrier networks generally rely on the same
technologies and QoS protocols, their business environments are
completely different. A Carrier's network and the features it supports
constitute services to be sold. An Enterprise network is generally a
constrained resource and a business tool. These different perspectives
create different challenges and require different strategies to leverage and
implement network, communications, and application convergence.
When a Carrier implements a VoIP, multimedia, and/or converged network,
it works through a process of determining what level of service to provide,
the size of the target market, and which of the available technologies to
implement. Sophisticated simulation tools are used to determine network
requirements and to predict performance. The Carrier then implements a
carefully defined solution within a well-understood usage environment and
operating conditions, and monitors the loads the network carries to ensure
that they do not exceed what it was designed for.
Typically, the Enterprise situation is the complete inverse. An Enterprise
usually starts with a network that was designed for data. The network is not
well-documented or well characterized. Enterprises consider voice and
real-time multimedia as simply additional data applications. And while
they assume that bandwidth solves all problems, they are also continually
looking for ways to reduce bandwidth and constrain their network growth.
These examples provide contrasting perspectives on how the principles set
out in the previous sections can be used to achieve various network
performance and user quality targets.
Disclosure Statement
The business case scenarios and examples used in the following chapters
are intended for illustrative purposes only. These case examples represent
potential results based on certain assumptions, which may not take into
account all factors potentially affecting results. If actual operating factors
differ from assumptions made, actual results may vary. Specific customer
operating factors such as deployment scenarios, actual growth rates, and
competition could cause actual results to vary compared to other
customers.
Chapter 21
VoIP and QoS in a Global Enterprise
Sandra Brown
Rob Miller
Shane Fernandes
Gwyneth Edwards
The following example reflects facts and figures obtained during a case
study performed by Nortel in the spring of 2004. All information
(financial, people, technical) corresponds to Nortel's environment at the
time of the case study. As applicable, facts will be denoted as Case Study
Figures.
This chapter describes a real life implementation of Quality of Service
(QoS), in two parts:
Voice over IP: Raising the need for Quality of Service
The Quality of Service (QoS) design
Voice over IP: Raising the need for Quality of Service
The introduction of Voice over IP at Nortel
Nortel is a company of 35,000 employees spread across 150 countries and
more than 240 locations. As the company deploys its own technology
across the corporate network, typically in alpha or beta stage, front-running
technologies are usually implemented at Nortel first. Voice over IP (VoIP)
is no exception. With the intent to provide mobile workers (executives,
sales teams and teleworkers) with a low-cost, flexible IP voice solution,
initial trials of VoIP began in 2000. As outlined in Chapter 19, VoIP is an
interactive application that requires little to no delay or jitter because the
user expects the same quality as standard TDM-based voice solutions. This
requirement drove the implementation of Quality of Service (QoS) at
Nortel.
Nortels real-time network
To appreciate the complexity of implementing QoS at Nortel, it is
important to understand the scope of the Enterprise network. This section
provides an overview of the application and network environment, as
indicated in the Nortel 2004 case study.
The applications
Nortel runs one of the largest real-time Enterprise networks in the world,
equivalent in breadth and scope to a Tier 2 Service Provider. In a typical
month, more than 1,500 terabytes of routed data traffic runs across the
network, headed for one of the 2,700 computer servers. By comparison, the
books in the U.S. Library of Congress, the world's largest library, contain
about 20 terabytes of text.
The IP network carries data from a variety of sources, grouped in the table
below by traffic category:
Table 21-1: IP network data sources

Traffic Category | Application | Monthly Usage (as indicated in the Nortel 2004 Case Study)
Network Control | Network management, operations, maintenance and engineering applications | The network moves more than 15,000 terabytes of data globally every month; growth over the past 6 months is 13%
Interactive | VoIP for mobile and work-at-home employees (24,000+); global video conferencing | 12 million minutes of public voice; 19 million minutes of private packetized voice; 17.1 million audio conferencing minutes; 80,000 audio conferences
Responsive | Live and on-demand global webcasts for employees and customer e-learning; global business and design applications | 107 webcasts; 407K webcast minutes; 280 gigabytes streamed; largest Clarify deployment worldwide (according to Gartner)
Timely | E-mail, file transfer | 39 million messages
The network
The following network description is as given in the Nortel 2004 Case
Study. Nortel's Enterprise network is based on a backbone architecture split
into four Border Gateway Protocol (BGP) regions (Europe, Americas,
Asia and India), each of which is assigned an Autonomous System (AS)
for Internet connectivity and transport. Routing is done hierarchically
through the core, distribution and access layers. Interregional routing is
based on OSPF routing principles, and all regional traffic traverses the core.
Between major campuses, the Wide Area Network (WAN) runs over
Optical SONET technology, much of which uses Optical Ethernet. Some
small offices are also connected to the WAN through SONET although
many are connected through Asynchronous Transfer Mode (ATM), Frame
Relay (FR) and Virtual Private Network (VPN) links.
Over the past few years Nortel has collapsed literally hundreds of private
virtual lines, frame relay and ATM circuits, and public and private voice onto
the converged core, and has moved much of the public voice onto the private
network. Upgrades to VoIP were done on the line side and are now
evolving to H.323 and SIP trunking.
Figure 21-1 illustrates an overview of the company's real-time network
architecture.
Figure 21-1: Nortels real-time network architecture
The Voice over IP business case [1]
The Voice over IP Business Case represents facts and figures taken from the
Nortel 2004 Case Study. The introduction of any new technology must be
substantiated by a business case. It is with this in mind that the Information
Services (IS) team approached the deployment at Nortel. Rather than simply
rolling out Voice over IP throughout major sites and sales offices, Nortel
based the implementation on the business needs of its mobility users. More
than 24,000 employees (two thirds of the employee population) have
mobility requirements, whether they are traveling across the world,
traveling to the next city, or working from home.
Mobility at Nortel
Mobility in the work place has become a standard requirement; however,
Nortel has led the industry in the mobility of its employees. Through secure
VPN solutions, for more than a decade employees have been accessing the
network remotely to perform their work. Therefore, leveraging current
investments and the installed base was a key driver in the deployment of
VoIP; the IS team wanted to enable the mobile worker to be more
productive and more connected than ever before.
The mobile employee
As per the Nortel 2004 Case Study, mobile employees at Nortel can be
grouped into three major categories: teleworkers, executives and
salespeople.
Teleworkers. More than one third (12,000) of Nortel's employee
population teleworks, either full or part time. Approximately 3,000
employees globally work from home and may incur high long-distance
costs depending on the location of their colleagues. The remaining group of
employees holds space within a Nortel building, either private or shared,
and works from home occasionally, either by choice or due to situations
such as inclement weather or even major blackouts. Many of these employees
do not have standard home
setups, such as 1-700 voice services.
Executives. This group at Nortel is highly mobile due to the global nature
of the company. They have tight schedules, are often on the road and need
to be accessible at all times. They typically make 70% of their calls to
Nortel colleagues.
Sales force. The sales team is not only mobile most of the time, but they
must also be able to demonstrate the application of technology that Nortel
holds in its portfolio. The sales employee needs access to the corporate
1. Please refer to the Disclosure Notice in the Section VI Introduction (page 498).
network at any time: from a hotel, an airport, or a customer site. Typically,
40% of the salesperson's calls are to other Nortel employees.
Voice over IP benefits
Prior to the implementation of VoIP, mobile workers at Nortel were using
POTS, cell phones, long distance, and calling cards for their voice needs.
According to the Nortel 2004 Case Study, they now leverage Nortel VoIP
solutions through the company's real-time network to contact colleagues
the world over, reducing both operational and capital costs in the range of
millions of dollars per year.
The business case, based on the profile of the mobile workers listed in the
previous section, provided a payback of eight months and a net present
value of $18 million. Costs included LAN upgrades to support QoS, WAN
upgrades to support QoS and ensure bandwidth, voice switch upgrades,
headsets and IP handsets, and incremental support costs for maintenance,
help desk support, technical support and incremental depreciation.
Future Voice over IP applications
Voice over IP, aside from replacing standard voice services, has positioned
Nortel for the introduction of other interactive applications including e-
learning, video conferencing, application sharing, operator services and
contact center support, to name just a few.
Voice over IP implementation strategy
Challenges of delay and jitter
As described in Chapter 3, successful deployment of VoIP is complex;
challenges cannot be resolved simply by expanding bandwidth. The
solution must balance appropriate bandwidth, QoS and network
management to enable VoIP to operate effectively, with the performance
expected of an interactive application. For example, network management
and troubleshooting of VoIP on a converged network is considerably more
complicated than on a Time Division Multiplexing (TDM) network, as the
former contains other traffic aside from voice.
As an interactive application, voice is sensitive to delay. For an IP data
network, typical application (end-to-end) delay may be 300-400 ms. This
is acceptable for normal non-real-time data applications; any variation in
the delay is experienced by the user as variable response time. However,
delays this large on a VoIP call will destroy simultaneity at the two ends of
the call, adversely affecting turn-taking that is typical of a conversation,
and making it difficult to interrupt. It can significantly influence one user's
perception of the politeness, honesty, intelligence, or attentiveness of the
other user. Interactive applications, such as voice, require a delay
performance equal to or less than 150 ms end-to-end.
Jitter in the IP layer can impair the voice channel. As noted in Chapter 3,
jitter is the variation in packet arrival: moment-to-moment changes in
network traffic and loading affect the transit times of individual packets.
VoIP and other real-time applications cannot be queued without increasing
the end-to-end delay, which degrades the application performance.
Network engineering and network management need to keep jitter low to
maintain quality for delay-sensitive applications.
So jitter and latency values are very important, and in this example, they
were the key drivers for implementing QoS.
Note: Quality of Service (QoS), as defined in Chapter 2, refers to a
set of technologies (traffic management and QoS mechanisms)
that enable the network administrator to achieve the desired traffic
performance targets. We assume that, in this example, Quality of
Experience (QoE) for VoIP calls is equivalent to that of PSTN
service.
Voice over IP implementation
Beginning in 2000, Nortel implemented IP Line on their Meridian 1
switches, enabling Voice over IP within the LAN. This prepared the
network to serve the targeted user base of mobile workers: the Meridian 1s
were upgraded to Release 3.0 and a signaling server was added to
transform the M1 into a Call Server 1000. Then the Multimedia
Communications Server (MCS) was turned on for that office.
This evolutionary approach was taken so that the current infrastructure
could be leveraged and payback would be a year or less.
Lessons learned about Voice over IP
Nortel deploys its own technology within the Enterprise, usually before the
products and solutions are generally released. The implementation of VoIP
was no exception; however, large scale deployment began during the
telecommunications down-turn. In hindsight, this provided the IS team at
Nortel an opportunity to ensure that business benefits were realized at
every step. According to the Nortel 2004 Case Study, the lessons learned
are, therefore, applicable for many Enterprises considering VoIP
deployment.
A mobile workforce creates a strong business case for VoIP.
Although productivity benefits and infrastructure savings are
important factors, they may not carry the business case.
A high level of security is required to ensure that viruses, worms
and other attacks do not cripple the data network, especially since
voice is now running across it; the architecture must be secure end-
to-end.
Managing technology refresh with decreasing or flat budgets
demands a positive business case including a short payback period.
The business case must leverage existing investments; technology
cannot be ripped out and replaced.
The IS team must adapt new engineering and operating processes to
support a real-time IP network. Training and education is
fundamental.
Organizations must move to an application-aware network with
QoS.
The Quality of Service (QoS) design
The following QoS strategy and architecture reflects the findings from the
Nortel 2004 Case Study.
Quality of Service, an immediate VoIP requirement
The extensive implementation of Voice over IP within Nortel, coupled with
the nature of IP traffic, with its spikes and intermittent saturation, required
Quality of Service (QoS) at the early stages of implementation. QoS
ensures a consistent level of service for an isochronous, interactive
application such as voice, avoiding the jitter and delay problems that
manifest as less-than-acceptable Quality of Experience (QoE) for the user.
The Nortel IS team established that employees would have the same
Quality of Experience (QoE) expectations for the VoIP service as they did
for the service it was replacing (the current PSTN). This implied that the
design of the QoS solution had to meet the same end-to-end network
performance targets established for PSTN service (as defined by User QoE
Performance Targets), as follows:
delay equal to or less than 150 ms
no jitter
packet loss of 10⁻³
The QoS strategy
Nortels IS team deployed a QoS design that uses DiffServ Code Point
(DSCP) values for different services and applications, based on Nortel
Networks Service Classes (NNSC), as outlined in Chapter 19. Voice over
IP, an interactive application, is assigned into the higher service classes,
based on its very low tolerance for loss, delay and jitter. At Nortel, the
middle NNSCs are used for priority data applications, such as multicast
and business critical client/server applications, and the lowest NNSCs (for
example Bronze) are used for applications such as e-mail. Please refer to
Table 21-2 for an overview of the DSCPs used.
Table 21-2: Nortel DiffServ Code Points used within the network core
QoS was implemented based on the size of and infrastructure at the site.
The following sections will provide an overview of the architecture at the
small office, the core, and the Local and Metropolitan Area Networks (LAN
and MAN).
Please refer to Figure 21-2 for an overview of the site QoS architectures.
Figure 21-2: QoS architecture by site
Application | DiffServ Code Point
Network Control | DSCP 56 (nc2)
Voice | DSCP 46 (ef) + 40 (cs5)
Video | DSCP 38 (af41 and af43)
High Priority Data | DSCP 30 (af33)
Low Priority Data | DSCP 22 (af23)
All other data | Not tagged
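The application-to-DSCP mapping of Table 21-2 can be expressed and applied in code. The sketch below mirrors the table (keeping one primary DSCP per class) and, assuming a Linux-style sockets API, marks a UDP socket by writing the DSCP into the upper six bits of the IPv4 TOS byte; the dictionary keys are illustrative names.

```python
import socket

# Application -> DSCP, mirroring Table 21-2. Voice also uses DSCP 40 (cs5)
# and video also uses af41; one primary value per class is kept here.
NNSC_DSCP = {
    "network-control": 56,     # nc2
    "voice": 46,               # ef
    "video": 38,               # af43
    "high-priority-data": 30,  # af33
    "low-priority-data": 22,   # af23
    "best-effort": 0,          # not tagged
}

def mark_socket(sock, app_class):
    """Write the class's DSCP into the socket's IP TOS byte. The six
    DSCP bits sit above the two ECN bits, hence the shift by 2."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, NNSC_DSCP[app_class] << 2)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_socket(s, "voice")
tos = s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(hex(tos))   # EF (DSCP 46) in the top six bits of the TOS byte
s.close()
```

In practice the marking is more often done (or re-done) at the network edge by classifiers, as in the Nortel design, rather than trusted from end hosts.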
Small office QoS architecture
The following small office QoS architecture reflects the findings from the
Nortel 2004 Case Study.
Protecting investment at the small office
The small office QoS architecture is discussed in detail, as this design was
based on a desire to preserve investments in existing hardware, and
therefore, involved adaptation of existing frame relay networks. This
approach allows for the evolution within the Enterprise to an affordable
QoS solution.
Nortel's small offices use a hub-and-spoke architecture, where the small
offices hub into a larger central office. The challenge with implementing
Voice over IP at the small office, onto the existing data network, arises
from the competition between smaller voice packets and large data packets.
By design, the smaller voice packets will sometimes arrive behind
the larger data packets. Even when a voice packet is given priority in the
queue, if a data packet has already begun transmission, the voice
packet must wait. This results in increased jitter, and an increased
chance that some voice packets will arrive outside the acceptable jitter
buffer timer of the voice applications. One or two lost packets may result in
distortion of the VoIP output, and longer bursts of packet loss will result in
the muting of the output signal. Quality of Service is needed to prevent this
problem.
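The delay caused by a voice packet waiting behind a large data packet is easy to quantify. The packet sizes and 128 kbit/s access speed below are illustrative assumptions, not measured values from the Nortel network:

```python
def serialization_delay_ms(packet_bytes, link_bps):
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

# A 200-byte voice packet stuck behind a 1500-byte data packet that has
# already begun transmission on a 128 kbit/s frame relay access link:
link_bps = 128_000
wait_behind_data = serialization_delay_ms(1500, link_bps)  # 93.75 ms
voice_itself = serialization_delay_ms(200, link_bps)       # 12.5 ms
print(wait_behind_data, voice_itself)
```

A worst-case wait of roughly 94 ms on its own consumes most of a 150 ms end-to-end delay budget, which is why the voice traffic needs separate treatment.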
The small office QoS solution
Nortel's small offices use BayRS routers. Given that a key component of
the QoS strategy was to leverage existing investment and avoid new
hardware purchases, a creative, inexpensive, and effective QoS strategy
was conceived to resolve the issue outlined above. By taking advantage of
NNSCs, DSCPs were mapped to ATM and frame relay service categories
so that QoS could be achieved between the small and larger offices. Figure
21-3 shows the QoS architecture for the small offices.
Figure 21-3: Small office architecture design for Quality of Service
Hub-site ATM conversion
At the hub site, a BN router with an ATM card is used to concentrate all the
remote small office frame relay links. The carrier service provider performs
FR-to-ATM conversion (converting frame relay to ATM), so that ATM access
can be used at the hub site and frame relay at the remote sites. Nortel then
uses a Passport 7400 to direct the remote sites' ATM virtual circuits to the
hub site's BN ATM interface. The Passport also provides the required ATM
QoS functionality by mapping Nortel Networks Service Classes to ATM
Service Categories, specifically rt-VBR for the voice packets.
Traffic shaping
At the small office, two frame relay Data Link Connection Identifiers
(DLCI) are used to separate Voice over IP from all other data application
traffic across the Wide Area Network (WAN), with higher priority given to
the VoIP traffic. To direct the traffic to the appropriate DLCI, Forward Next
Hop (FNH) filters based on the DSCP/TOS bits within the IP header are
implemented on the Ethernet ports as the IP traffic ingresses the router
ports. BayRS Protocol Priority Queuing (PPQ) is implemented at the small
site's router.
The VoIP DLCI is a shaped DLCI and the Data DLCI is unshaped. By
default, shaped DLCI traffic is prioritized over unshaped DLCI traffic. All
VoIP signaling and media path messages are assigned an Expedited
Forwarding (EF) DiffServ Code Point (DSCP) of 46.
To prevent traffic congestion on one DLCI from causing packet drops on an
uncongested DLCI, clipping is enabled on the ATM interface card in the
BN router. With clipping enabled, the BN ATM card drops all packets in
excess of the frame relay Sustainable Cell Rate (SCR) and Peak Cell Rate
(PCR) values. The data circuit will typically drop packets first because
data is bursty in nature, thus ensuring that the VoIP traffic is not
dropped by the BN router.
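The clipping behavior can be modeled as a simple per-circuit rate policer. This is a deliberately simplified sketch: real ATM policing applies the Generic Cell Rate Algorithm to cells against the SCR and PCR values, and the rates below are invented:

```python
# Simplified single-rate policer illustrating "clipping": traffic in
# excess of the contracted rate is dropped, not buffered.
def police(offered_bps_per_second, contract_bps):
    """Return (passed, dropped) bit counts for each one-second interval."""
    passed, dropped = [], []
    for offered in offered_bps_per_second:
        sent = min(offered, contract_bps)
        passed.append(sent)
        dropped.append(offered - sent)
    return passed, dropped

# Bursty data vs. smooth voice, each checked against its own contract:
data_pass, data_drop = police([64_000, 256_000, 32_000], 128_000)
voice_pass, voice_drop = police([80_000, 80_000, 80_000], 96_000)
print(sum(data_drop), sum(voice_drop))  # the data burst is clipped; voice is untouched
```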
Leveraging the Passport 7400
In this small-site design, the Passport 7400 provides two key functions:
It allows for the consolidation of all services onto the ATM access
at the core site.
It provides the ATM QoS functionality required to support voice
and data to remote locations.
With the use of ATM, both large and small packets are segmented into
53-byte cells (the ATM standard cell size). Smaller cells help to
resolve the issue that exists on serial links with smaller voice packets being
queued behind larger data packets. The Passport 7400 provides the ATM
QoS functionality by inserting the higher-priority voice packet flow
between the lower-priority data packets, thus reducing VoIP delay and
jitter.
Sidebar: BayRS protocol priority queuing
The queuing process
With protocol prioritization enabled on an interface, the router sends
each packet leaving an interface to one of three priority queues: high,
normal, or low. The router automatically queues packets that do not
match a priority filter to the normal queue. To send traffic to the other
queues, the network engineer can create outbound traffic filters that
include a prioritizing action. These are called priority filters.
The dequeuing process
After queuing packets, the router empties the priority queues by sending
the traffic to the transmit queue using one of two dequeuing algorithms:
Bandwidth Allocation Algorithm
Strict Dequeuing Algorithm
By default, protocol prioritization uses the bandwidth allocation
algorithm to send traffic from the three priority queues to the transmit
queue.
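The strict dequeuing algorithm from the sidebar can be sketched as follows; this is a simplified model of the behavior, not BayRS code:

```python
from collections import deque

# Strict-priority dequeue: always drain the highest non-empty queue first.
queues = {"high": deque(), "normal": deque(), "low": deque()}

def enqueue(priority, packet):
    queues[priority].append(packet)

def dequeue():
    """Return the next packet in strict priority order, or None if idle."""
    for name in ("high", "normal", "low"):
        if queues[name]:
            return queues[name].popleft()
    return None

enqueue("low", "email")
enqueue("high", "voice1")
enqueue("normal", "web")
enqueue("high", "voice2")
order = [dequeue() for _ in range(4)]
print(order)  # ['voice1', 'voice2', 'web', 'email']
```

Under strict dequeuing the high queue can starve the others, which is why the bandwidth allocation algorithm is the default.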
With the use of the Forward Next Hop (FNH) filters, the VoIP and data
traffic are separated into two different virtual circuits as the traffic
egresses the BN and ingresses the PP7400.
All Voice over IP traffic from the BN arrives on a PP7400 virtual
circuit with an ATM service category of rt-VBR.
All Data from the BN arrives on a PP7400 virtual circuit with an
ATM service category of nrt-VBR.
Both virtual circuits pass through the Virtual Path Terminator (VPT),
which shapes the combined traffic to the remote end's access link speed.
As the ATM traffic passes through the PP7400's VPT, priority is given to
the rt-VBR VCC (voice traffic). If the combined total of the two VCCs is
higher than the VPT shaped value, the nrt-VBR (data) traffic is
buffered and, if the buffer overflows, dropped.
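This priority behavior can be modeled in a few lines; the function and the bit rates are illustrative assumptions, not Passport code or measured values:

```python
def vpt_share(voice_bps, data_bps, shaped_bps):
    """Priority sharing of a shaped virtual path: rt-VBR (voice) is served
    first; nrt-VBR (data) gets whatever shaped capacity remains."""
    voice_out = min(voice_bps, shaped_bps)
    data_out = min(data_bps, shaped_bps - voice_out)
    data_excess = data_bps - data_out  # buffered, then dropped on overflow
    return voice_out, data_out, data_excess

# 200 kbit/s of voice and 500 kbit/s of data into a path shaped to 512 kbit/s:
print(vpt_share(200_000, 500_000, 512_000))  # (200000, 312000, 188000)
```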
Future QoS implementation
Nortel has found the above design to be an effective and inexpensive means
to provide QoS for VoIP to its small office sites. Applications such as video
and streaming are currently placed into the data queue and are not
prioritized in a high queue.
Future evolutions of the Nortel design to move towards higher capacity and
consolidated network designs will use next generation Nortel products.
QoS will be enabled simply as a by-product of this evolution to new
network capability, and multiqueue QoS will be implemented at that time.
Nortel's core network QoS strategy
The following Nortel core network QoS strategy reflects the findings from
the Nortel 2004 Case Study. The Nortel core is defined as the optical
network, which provides WAN connectivity between the major Nortel
campuses. The core network connects between offices in Texas and North
Carolina, USA; Brampton and Ottawa, Canada; Châteaufort, France;
Maidenhead, England; Beijing and Hong Kong, China; and, Chatswood,
Australia.
The existing Nortel core router architecture supports an eight-queue QoS
design, three of which are currently used, as follows:
Voice is prioritized into the highest queue
Video and multicast into the next queue
All other traffic into a lower queue.
The optical network is fronted by a Passport 7480, which is used to convert
the traffic into ATM cells. Please see Figure 21-4 for an illustration of
the core QoS strategy.
Figure 21-4: Core QoS strategy
Nortel's LAN/MAN strategy
The following Nortel LAN/MAN strategy reflects the findings from the
Nortel 2004 Case Study. QoS is required not only in the WAN, but also on
the local area network, in order to provide an end-to-end service. Using
Nortel Networks Service Classes, the IS team enabled QoS on the BPS,
BayStack* 460/470, and Passport 7400 products to allow them to prioritize
and pass on the DiffServ Code Points (DSCP) to the next device up the
chain.
Lessons learned
For companies wanting to move to a real-time network, the following
lessons learned can assist in the implementation of Quality of Service
(QoS):
The QoS strategy should be driven by the needs of the applications
that run over the corporate network.
The applications must be grouped into traffic categories to
determine the QoS requirements and priorities (let Quality of
Experience drive the categorization).
Consider not only current interactive applications but also future
applications that will run across the real-time network.
Even if QoS is implemented on a site-by-site basis, a complete QoS
strategy should be defined prior to deployment to ensure that the
network becomes real-time end-to-end.
To minimize costs, leverage current infrastructure and managed
services, especially in the small sites.
To simplify QoS implementation, take advantage of the DiffServ
Code Points (DSCP) mappings to Nortel Networks Service Classes
(NNSC) and the service categories of the various transport
technologies such as IP, ATM and frame relay.
Chapter 22
Real-Time Carrier Examples
Edited by Kathy Joyner
The telecommunications marketplace has changed radically in the past few
years, and the rate of change is exploding rather than levelling off.
Consumers have quickly adapted to the ability to communicate any time,
anywhere, but are demanding integration of the ever-increasing number of
communication devices. As well, consumers are demanding rich new
features to personalize and enhance that communication, and increase
mobility. Enterprises have demands of their own, driven by the need to
increase productivity and concentrate on their core business.
In addition to being faced with these challenges, wireline service providers
are coping with ever-increasing competition, brand erosion, and financial
pressures. Revenue from traditional wireline services is nearly flat, while
less profitable services are thriving (RHK, Telecom Economics Update,
Nov. 2003). Ubiquity is driving down the cost of Internet Protocol (IP)-
based services, even as popularity and demand for these services grow.
Traditional service providers are losing ownership of their traditional
customer base as wireless service replaces traditional wireline service, and
broadband service erodes the market for second lines.
The next wave of revenue generating services can be achieved only by
combining new services with an intelligent, service-aware network. Nortel
is ready to help carriers achieve their objectives, including convergence of
their networks and value-rich, revenue-generating services.
This chapter provides a number of examples of the changes facing the
telecommunications industry and some solutions to challenges caused by
those changes.
Centrex IP
An April 2003 InfoTech* report entitled "Enterprise Convergence: The
Race for IP Telephony Supremacy" shows that by the end of 2004, over
seventy percent of U.S. Enterprises will have implemented IP Telephony in
at least one site. This escalation in demand for IP Telephony is driven by
the desire to reduce costs while retaining the ability to enhance employee
mobility and increase worker productivity.
It is not surprising then that Anycarrier.com has experienced a five percent
decline in its Centrex customer base over the last eighteen months. In fact,
research by IDC (2003) shows that service providers as a whole are seeing
their Centrex installed base eroding by three to twelve percent per year
(depending on segment and service provider).
Understanding that the erosion of its Centrex customer base is only going
to accelerate, Anycarrier.com initiated a study to find a solution that would
allow it to retain its current Centrex customers and also add high-demand
VoIP services as part of a comprehensive product offering. Based on the
results of that study, Anycarrier.com determined that Centrex IP is the only
solution that provides it with an evolutionary approach to VoIP, allowing it
to retain its existing Centrex revenues while providing a platform for new,
IP-based business services.
Technical challenge
With Centrex IP, Anycarrier.com can offer the reliability and rich feature
set of hosted Centrex in conjunction with the next-generation services of
VoIP, allowing it to retain and grow its Centrex base. Because Centrex IP
builds on the industry-leading business voice benefits of Centrex,
businesses can take advantage of the benefits of IP Telephony in a flexible,
cost-effective and low risk way. Key market segments are as follows:
Existing Centrex Base. Companies who are already seeing the
benefits of the full feature set and reliability of Anycarrier.com's
Centrex service are the prime target for the move to Centrex IP.
Medium/Large Enterprises. Medium to large companies across
many industries also provide a great opportunity to introduce
Centrex IP. In many cases, these companies have already
considered implementing VoIP in some fashion and may have a
budget set aside for that step. In addition, these companies are also
under competitive pressure to increase employee productivity by
providing better communications, while simultaneously lowering
operating and IT costs.
Within the Medium/Large Enterprise segment, those companies with the
following characteristics are the primary targets for Centrex IP services:
New Branches. Companies that are opening a new branch or site
for their company. The branch will need to be set up with a cost-
effective extension of services to connect and communicate with
the main corporate site.
Major Renovations. Companies that are overhauling/rebuilding
their office space or telecommunications systems. This renovation
may provide the opportunity to upgrade their telephone and LAN
infrastructure.
Small Businesses. Small businesses across many different
industries are another potential opportunity for Centrex IP services.
These companies typically make changes more quickly and easily.
Like their larger counterparts, small businesses also feel the
pressure to increase employee productivity, while maintaining
maximum financial flexibility.
Solution
Network diagram
From a high level perspective, Centrex services can continue to be offered
from either existing switch platforms or from newer call server platforms.
The diagrams below illustrate how the migration to an IP-based Centrex
service can be accommodated.
Figure 22-1: Centrex IP network diagram
Architecture overview
With Nortel's Centrex IP solution, Anycarrier.com can offer full-featured
Centrex services over an IP infrastructure using two primary components:
Centrex IP Client Manager with a DMS*-100/500 or a
Communication Server 2000
IP phones
Key elements
Centrex IP Client Manager. The Centrex IP Client Manager
(CICM) is a high-availability, NEBS-compliant platform that is
hosted from a DMS-100/500. The CICM is responsible for hosting
IP telephones or PC Clients and serves as a VoIP gateway when
used with the DMS-100/500 switch, enabling delivery of Centrex
IP's rich feature set. An alternative implementation uses a
Communication Server 2000 softswitch.
Communication Server 2000. The Communication Server (CS)
2000 is the primary network intelligence component for the Nortel
carrier VoIP solution. A superclass softswitch delivering market-
differentiating local, long distance, and tandem services, the CS
2000 provides high-capacity, centralized call processing and
service transaction logic, including translations, routing control,
network signaling, and the creation of billing records. The CS 2000
also directs access gateways to establish and tear down virtual
connections for delivery of packetized voice and data traffic over
the packet network including support of IP phones.
i2002/i2004 Internet Telephone. The i2002/i2004 Internet
Telephones are full-featured IP business sets with outstanding voice
quality. Key features include an LCD screen with supporting soft
keys for user configuration and operational status. A high-quality
speakerphone is integrated in the i2002/i2004. The i2002/i2004
phones connect to a LAN using a standard RJ-45 interface. A LAN-
powered option simplifies wiring and provides uninterrupted phone
service in the event of a power outage. The i2002/i2004 phones
allow an Enterprise to balance user requirements for high
functionality with the benefits of streamlined management and
reduced facilities costs. The i2002/i2004 phones are supported by
the Nortel Enterprise portfolio, thereby ensuring investment
protection as business needs change.
Key takeaways
Centrex IP delivers more than 200 Centrex features over an IP infrastructure.
Centrex IP is identical to the hosted Centrex service of today, except that
services can be delivered over IP as well as through circuit switching. Now
businesses can migrate gradually from their existing Centrex service to
voice over IP, at their own unique pace. To our knowledge, no other hosted
service has this capability. Customers get the best of both worlds: the
advantages of a gradual migration to VoIP communications with no
disruption of their current features, dial plan, and billing.
With Centrex IP from Anycarrier.com, Enterprises can perform the
following functions:
Provide uniform services and features for all users regardless of
location
Extend the same advanced business features to Centrex IP and
Centrex users
Support uniform dialing plans and abbreviated dialing
arrangements on a company-wide basis regardless of location
Avoid toll charges for calls between different company locations
Administer one large Centrex group rather than several separate
groups
Integrate with current network-hosted solutions such as Voice Mail
and IVR
For the Enterprise, Centrex IP brings the advantages of new technology
with all of the traditional advantages of a hosted solution:
Minimum capital investment required
Limited information technology resources required for VoIP
upgrade
High availability of mission-critical telephone service
No risk of technology obsolescence
Strategic advantages and flexibility of outsourcing
Local
As wireline revenues continue to decrease, local service providers are
looking for ways to reduce costs and converge networks so that voice, data,
and wireless can leverage the same network. They also want to lay the
foundation for new services that provide additional revenue opportunities.
For these reasons, major local exchange carriers are taking on the challenge
of converting their Class 5 circuit switches to packet switches.
Drivers for considering migrating to a packet network vary by service
provider. However, common requirements are as follows:
Meeting market demands for data services
Delivering solutions cost-effectively
Providing single, integrated, carrier-grade packet network for voice,
high-speed data, and special services with efficient network
management capabilities
Reducing capital and operating costs
Finding sources of new revenue
In the end, many service providers determine that it is more cost-effective
to migrate to new packet technology than to grow and maintain their
existing circuit switches.
Technical challenge
Service providers are closely reviewing their technology choices in order to
decide whether to continue to grow and maintain the circuit-switched
network or migrate to a packet network.
Most service providers have a wide range of circuit switching and back-
office equipment, requiring experienced craft personnel to maintain and
manage circuit switches from different vendors. From a network support
perspective, there are many different products to understand and manage
on a daily basis. In addition to the various and dated circuit switching
equipment in networks today, some switches do not support Local Number
Portability (LNP), a regulatory requirement that must be provided to all
subscribers. In addition, capital expenditure decisions are looming for
many service providers.
From a business perspective, packet networks can deliver operational cost
savings. For example, migration can reduce the number and different types
of back-office systems, simplifying network management. In addition, the
number of nodes in the network is decreased; in one real-world case, by
almost 75 percent. Several Class 5 switches can be collapsed into one
centrally located communication server that serves a much wider
geographic area. Craft personnel no longer have to know and manage
multiple types of back-office systems. In addition, they no longer have to
manage as many elements because separate layers for Tandems and
Remotes, as well as multiple networks, are eliminated.
Solution
Network diagram
From a high level perspective, convergence offers an opportunity to reduce
complexity and to reduce costs over traditional TDM Class 4 and Class 5
networks. As the diagram below illustrates, the transition from a TDM-
based network to a packet-based (IP or ATM) call server network
significantly reduces the number of trunks that have to be maintained, as
well as reducing the overall load on the network.
Figure 22-2: Local network diagram
Solution Details
Nortel provides a Carrier Voice over IP Local Solution. Whether a service
provider is interested in ATM or IP, Nortel can provide the solution. With
decades of Class 5 experience, Nortel is uniquely qualified to provide
service providers a packet solution that can evolve their circuit networks.
This solution delivers full feature transparency, providing a complete set
of Class 4 and Class 5 capabilities with more than 3,000 features in every
software load.
The major components of the Nortel VoIP Local Solution include:
Communication Server 2000 superclass softswitches providing
comprehensive services, carrier-grade attributes, and regulatory
features
Media Gateway 9000, a line gateway supporting both broad- and
narrowband services
Media Gateway 4000, a trunking gateway used in ATM networks
(North America only)
Packet Voice Gateway 7000 or 15000, a trunking gateway used in
ATM or IP networks
(Figure 22-2 inset, "Radically Simplified Networks": a TDM Class 4 and 5
network of 15 office nodes, 120+ trunk groups, and a tandem layer serving
four IXCs collapses into a packet superclass network of 2 call servers and
10 trunk groups, yielding 90% fewer trunks, 40% fewer call attempts, and
an absorbed tandem layer.)
Service providers may also use Nortel Multiservice Switches to provide
high-capacity, carrier-grade switching. While this is not a requirement
(because Nortel's solutions are interoperable with a wide range of other
vendors' equipment), many service providers have chosen Nortel
Multiservice Switches because this choice gives them the convenience of
working with a single vendor.
Most local service providers need to support a variety of different line
types (coin, key, and PBX) as well as a variety of different phone types
(traditional residential phones, business sets, and IP phones). In order to
keep evolution of the network inconspicuous to the end user, Nortel
Communication Servers offer full-feature transparency of 3000 telephony
services, ensuring that the same voice features available today from a
circuit switch can be offered in a packet network. This means that service
providers don't have to worry about losing the ability to offer popular
features. In addition, the functionality of voice features does not change,
eliminating the need for end users to relearn use of these features.
Finally, the packet network enables local service providers to expand into
new markets, incorporating an edge-out strategy into other urban areas.
Key takeaways
Nortel delivers a turnkey Local VoIP Migration solution, minimizing the
number of service provider personnel required to support the migration.
This turnkey solution includes:
Global professional services to assist with network planning,
engineering and design
Installation and activation of packet equipment
Performing physical cutover
Delivering training and documentation
Service providers around the globe are deploying the Nortel VoIP Local
Solution. Two major North American service providers have made public
announcements regarding their Voice over IP plans: Verizon* has
announced plans for deployment of the Nortel VoIP Local Solution, and
Sprint* has live offices already converted to packet. Other service
providers in the Caribbean, Latin America, and Asia Pacific have also
announced their intention to deploy the Nortel VoIP Local Solution.
Long distance
Long distance providers and new carriers alike are faced with a paradox:
the total minutes of use for long distance services is expanding, but the
revenues per minute are decreasing. However, there is still tremendous
potential in this market, creating opportunities for some and challenges for
others. Ascendant carriers are staying ahead of the competition by reducing
operating costs, expanding capacity, and delivering reliable services.
Nortel Carrier VoIP Long Distance Solution is an ideal step for long
distance service providers. This low-risk solution helps lower transport/
transit and capital costs with the efficiencies of multivendor packet
telephony. Packet trunking is the economical engine that can help pay for
network transformation today as the service provider explores new revenue
opportunities enabled by new voice and multimedia services and easier
access into other markets.
Technical challenge
This solution delivers full-featured, carrier-grade telephony, data, and
multimedia services over multiservice packet networks. It uses open
standards packet technology for the packet backbone. Carriers can choose
either the AAL2 protocol or IP transport to provide a full-featured packet
transit application.
Packet networks offer cost efficiency, open standards, and fast time-to-
market for new packet services, without compromising the values of
traditional telephony, including service richness, voice quality, reliability,
scalability, and manageability.
This solution is based on Nortels Packet Trunking application and allows
service providers to deploy their own differentiating telephony, data, and
multimedia services.
This solution also lays the foundation for delivery of local and transit
services for business and residential customers with the future addition of
line-side multiservice gateways. The service provider can also add cable or
wireless gateways to explore other market opportunities, or take advantage
of Enterprise network connectivity and SIP capabilities to deliver new
services.
Solution
Network diagram
From a high level perspective, the transition to an IP or ATM-based packet
network can result in large savings in long distance costs. The diagram
below shows a comparison of long distance trunking requirements for a
TDM-based trunk network and a packet-based network. The larger the
number of flows over a link or trunk group, the smaller the difference
between the statistical worst case and the average. The IP case can be
more efficient because all of the traffic on a link is considered when
sizing the link; it does not have to be segmented into smaller
point-to-point flows as in the TDM case, where each individual trunk
group is sized according to its own statistical worst case.
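The pooling gain can be illustrated with the Erlang B formula. The traffic loads and the one percent blocking target below are illustrative assumptions, not figures from the Nortel comparison:

```python
def erlang_b(traffic_erlangs, trunks):
    """Blocking probability via the standard iterative Erlang B recursion."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = (traffic_erlangs * b) / (n + traffic_erlangs * b)
    return b

def trunks_needed(traffic_erlangs, target_blocking):
    """Smallest trunk count whose blocking meets the target."""
    b, n = 1.0, 0
    while b > target_blocking:
        n += 1
        b = (traffic_erlangs * b) / (n + traffic_erlangs * b)
    return n

# Four trunk groups of 100 erlangs each, sized separately, versus one
# shared pool carrying the combined 400 erlangs, both at 1% blocking:
separate = 4 * trunks_needed(100, 0.01)
pooled = trunks_needed(400, 0.01)
print(separate, pooled)  # the shared pool needs noticeably fewer trunks
```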
Figure 22-3: Long distance network diagram¹
Solution Details
Instead of managing multiple overlay networks, the service provider can
deliver all types of services over a single infrastructure. This design allows
more choices in service deployment and vendor selection to help decrease
long-term capital costs. H.248-compliant multiservice gateways connect
existing trunks to the backbone, with no need to modify existing facilities
or their originating multivendor offices. The efficiencies of a packetized
backbone can reduce ongoing operating costs by twenty to forty percent, as
proven by a Nortel business case¹. In contrast to today's individually
engineered fixed-bandwidth trunks, a packet network efficiently routes all
types of traffic by allocating and sharing network resources on demand.
The packet network also helps reduce cross-connects, multiplexers, IMT
facilities, and associated peripherals, reducing capital expenses by twenty
to thirty percent, again as shown in a Nortel business case¹.
1. Please refer to the Disclosure Notice in the Section VI Introduction (page 498).
(Figure 22-3 inset, "Long Distance Savings": four DMS nodes with 60K
interoffice trunks per node (240K total) and 40K access trunks per node
collapse into two CS 2000 nodes with 70K interoffice DPT trunks each
(140K total; 80K trunks removed through the collapse and 20K more through
shared DPT trunks), a 58% reduction in ports, with 80K access trunks per
CS 2000 and no net change in access trunks.)
This solution also offers reliability, security, and quality of service. The
service provider can transition a node-centric, hierarchical topology to a
simplified architecture where a converged network performs like a single,
unified switch. The streamlined network design offers greater service
capacity, variety, and speed-to-market, all with fewer nodes. And because
fault-tolerant elements are distributed across the network, single points of
failure are removed and superior survivability is realized, with no sacrifice
in voice quality, latency, or capacity.
The Carrier VoIP Long Distance solution offers a standards-based
switching and routing infrastructure that transports today's revenue
generating services while supporting competitive, next generation services
all over a high-capacity ATM or IP backbone. The service provider can
deliver leading-edge long distance applications while reducing transport
costs and deferring or eliminating future capital expenses.
The following listing summarizes key network elements in this solution.
All are built to meet or exceed carrier-grade standards and protocols set by
ITU, Telcordia, ANSI, ETSI, IETF, ATMF, and other standards bodies.
Gateways. With an ATM AAL2 or IP backbone, the robust Packet
Voice Gateway 15000 connects standard TDM trunks to the service
provider's packet backbone. This trunk gateway appears as a
tandem/transit office termination to any vendor's circuit-switching
office.
Superclass Softswitch. The first vendor to deliver a superclass
softswitch, Nortel offers a choice of two platforms: the
Communication Server 2000 and Communication Server 2200
(previously the Communication Server 2000 Compact). A
superclass softswitch offers the critical attributes associated with
successful softswitch deployment, such as consolidated local, long
distance, wireless, and cable applications; comprehensive service
(3000+ features); regulatory capabilities; and carrier-grade
attributes. Both of these platforms are designed to control
multivendor gateways and provide the call control and other
network intelligence required to deliver revenue-enhancing
services.
Multiservice Switches (MSS). As a high-capacity ATM switch, the
Nortel Multiservice Switch 15000 supports ATM, IP, frame relay,
circuit emulation, and voice services. This system scales from 40
Gbps of redundant bearer capacity to terabits. High-capacity, fault-
tolerant Layer 2 switching/routing is provided by the Ethernet
Switch 8600, which aggregates local IP traffic, providing an
interface to a high-speed optical backbone for IP voice and
signaling traffic.
Signaling Network Interface. With the Universal Signaling Point,
the network fully communicates with PSTN SS7 messaging for
seamless interworking with established networks and Intelligent
Networking services. This server supports both traditional and next-
generation packet-based services.
Audio server. In a small footprint, the Media Server 2000 series
provides audio resources including announcements, dynamic audio
recording, conferencing, speech recognition, lawful intercept, and
more.
Management. A comprehensive set of network management tools
and applications work together seamlessly to provide both element
and network management.
Key takeaways
The service provider who is already delivering long distance services can
incorporate current facilities to extend the service life of current
investments. The service provider who is building a new network begins
with a carrier-grade, high-capacity, future-ready network that offers
immediate revenue opportunities to power future growth.
The Carrier VoIP Long Distance Solution helps create successful futures
for carriers by laying the foundation for delivering packet-enhanced
services in other markets. And this solution enables profitable, high-
performance packet trunking with the full traditional voice service set that
service providers demand and have come to expect.
Multimedia
In the past, communications were based on a single medium: voice. In the
21st century, communications require the integration of multiple media. To
ensure effective communication for both Enterprises and end users, next-
generation SIP services allow consumers to integrate voice, video, and data
into a conversation as simply and easily as they pick up the telephone and
make a simple voice call today.
Technical challenge
For the purpose of this example, we have chosen to focus on a service
provider who is solely deploying multimedia services for the residential
and small office/home office markets. Similar multimedia service offerings
are available for medium to large Enterprises through a carrier-hosted
model.
To take advantage of this market opportunity, 123com has initiated a
program to design and deploy unique consumer multimedia service
offerings that create opportunities for new revenue streams, increase
penetration for broadband and primary line telephone services, improve
customer retention, and generate subscriptions in new territories.
Solution
Network diagram
The network diagram below illustrates how five go-to-market services
can be offered. These services are as follows:
Voice and multimedia over a broadband connection to the Home
Office
Voice and multimedia from a soft client over a broadband
connection to the residence
Voice and bundled long distance over a broadband connected
Integrated Access Device (IAD) in the residence
Personal Agent web portal services to enhance existing 123com
residential voice subscribers
Voice and multimedia Remote Access over a broadband connection
to the Internet
Each of the service offers makes use of 123com's or other carriers' high-
speed data services, the public Internet, 123com's IP backbone, and
123com's PSTN network.
Figure 22-4: Multimedia network diagram
Solution Details
For the residential consumer market, 123com considered four service
offerings, described below.
123com Multimedia Communications Center. Intended for the
installed base of broadband customers and telephone customers,
this service offers advanced multimedia features such as video
calling, picture ID, file transfers, and Web pushes using the
customer's personal computer. Unlimited on-net calls are provided
as part of the service, along with an optional outbound long
distance calling plan. Inbound calls from the traditional telephone
network will be allowed in the markets where 123com has primary
line service or selected states where this type of service is allowed
from a regulatory perspective.
123com Broadband Telephone Service. Intended for the installed
base of broadband customers who don't use or have a personal
computer, this service provides access through existing telephones
with an adapter that plugs into the cable modem. Unlimited on-net
calls are allowed, with an optional outbound long distance calling
plan. Inbound calls from the PSTN will be allowed in the markets
where 123com has primary line service or selected states where this
type of service is allowed from a regulatory perspective.
123com Broadband Home Office Package. This service greatly
increases the productivity of home and small office
communications. In addition, the bundling of an i2002 or i2004 IP
phone with PC-based software provides a seamless system that
works together. In standalone mode, the IP phone provides always-
available communications, even when the PC is turned off. In PC-
controlled mode, the IP phone provides the voice path used with
the PC services.
123com Home Office Multiline Service. Intended for the installed
base of 123com telephone customers, this service provides find me/
follow me services through a Web-based client. There is no
software to download onto the home computer and a broadband
connection is not required to access this feature. Subscribers simply
use the Web client to set up call screening and routing preferences.
The services described above are delivered by the Service Core, composed
of the Multimedia Communication Server (MCS) 5200 and (optionally) the
Communication Server 2000 (CS 2000). PSTN trunking can be provided
through the Packet Voice Gateway, which provides trunk gateway
functionality in some markets, and through the MCS 5200 PRI gateway in
markets that aren't currently served by a CS 2000. These gateways provide
the interconnection of packet and TDM networks, and services delivered to
residential, SOHO, and small business Internet users.
Key takeaways
This migration solution offers consumer multimedia service offerings that
create opportunities for new revenue streams, increase penetration for
broadband and primary-line telephone services, improve customer
retention, and generate subscriptions in new territories.
Cable
While data and video on the Internet aren't new, having voice included in
the mix is. Using Internet Protocol (IP) telephony or Voice over IP (VoIP)
technologies, phone conversations are converted into packages of data to be
sent over the Internet in ways similar to e-mail and web sites. Phone calls
anywhere in the world can be significantly less expensive with IP
telephony. And because voice, data and video are all using one network,
packaged as similar data packets, services can be bundled together from
one service provider with the flexibility for the user to receive the
information on any communications device, regardless of location: office,
home, or on the road.
According to market research on communications preferences completed
by Pollara Inc. for Nortel, consumers are growing impatient with various
communications devices that don't work together. The research found that
today's consumers expect instant communications but instead are plagued
by having to navigate cumbersome menus on each device they own when
they want to reach someone.
Traditional cable providers are aggressively moving into service areas
formerly dominated by traditional voice providers. The incumbent service
provider is faced with a two-fold challenge: find a way to generate new
revenues, and curb subscriber flight to satellite service.
The current opportunity for cable service providers implementing VoIP
technologies is to simplify communications while, at the same time,
creating a user friendly communications environment that seamlessly
adapts to the lifestyle and needs of each individual. The technology that is
being used with VoIP to give businesses and consumers a wider range of
services and the ability to fine-tune the management of their
communications is called Session Initiation Protocol (SIP).
Technical challenge
When making the decision to enter the VoIP business, the service provider
should be aware that VoIP creates more than one business opportunity. A
good part of the reason lies in the underlying technology. The concept is to
convert an analog voice signal into a series of ones and zeros that can be
reconstructed into the original analog format without perceptible loss of
quality. Once any information is converted into this digital form, all the
services developed for data switching, routing, and storage become
available as tools to tailor the voice product for the service provider's market.
One of these tools is IP, which is the underlying technology used to move
information on most data networks, including the public Internet.
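The digitization idea described above can be sketched in a few lines of Python. This is an illustrative uniform 8-bit quantizer at the 8 kHz telephony sampling rate, not the companded mu-law/A-law encoding that G.711 actually uses, and the 440 Hz tone merely stands in for a voice signal:

```python
import math

SAMPLE_RATE = 8000      # samples per second, standard for telephony
BITS = 8                # bits per sample (G.711 also uses 8, but companded)

def digitize(signal, n_samples):
    """Sample an analog signal (a function of time) and uniformly quantize it."""
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        x = signal(t)                                    # analog value in [-1.0, 1.0]
        level = round((x + 1.0) / 2.0 * (2**BITS - 1))   # map to integer 0..255
        samples.append(level)
    return samples

def reconstruct(samples):
    """Map quantized levels back to approximate analog values."""
    return [level / (2**BITS - 1) * 2.0 - 1.0 for level in samples]

# A 440 Hz tone standing in for a voice signal
tone = lambda t: math.sin(2 * math.pi * 440 * t)

digital = digitize(tone, 160)              # 20 ms of audio at 8 kHz
analog = reconstruct(digital)

# The quantization error stays small -- the "without perceptible loss of
# quality" the text describes
max_err = max(abs(tone(n / SAMPLE_RATE) - analog[n]) for n in range(160))
print(f"max reconstruction error: {max_err:.4f}")
```

Once the signal exists as this stream of integers, any data switching, routing, or storage service can operate on it, which is exactly the leverage the paragraph above describes.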
VoIP opportunities for cable include primary line telephone service, long
distance, SIP-based broadband voice, and business services. This section
will concentrate on primary line service.
Primary line service is a one-for-one replacement of the incumbent
telephone company's service, because it is carrier-grade telephony. This
means it must be highly reliable, scalable, feature-rich, maintainable
without service outage, and include the ability to track and measure key
performance metrics. Most importantly, carrier-grade means a quality of
service (QoS) for end-to-end transmission that keeps voice quality within
the levels expected by consumers for commercial telephone service.
Solution
Network diagram
From a high-level perspective, the same packet network can be used to
deliver multiple media to wireline customers, including voice, video, and
data. The diagram below illustrates the connection of a call server to a
cable-based customer for providing voice, data, and video services.
Figure 22-5: PacketCable™ end-to-end VoIP architecture
Looking at the Nortel cable network solution in slightly more detail,
multiple services can be offered over the same core network to reach a
wide variety of users.
Figure 22-6: Nortel cable VoIP Solution
Solution Details
There are several options for using VoIP as the underlying technology for
primary line service. The diagram at the end of this section shows the
architecture detailed by the CableLabs PacketCable™ specifications for
end-to-end VoIP, which is being deployed today. In this scenario, a standard
subscriber telephone connects through existing phone wiring to a new
Embedded Multimedia Terminal Adapter (E-MTA) that may be located on
the side of a home or within the home, depending on the packaging. The E-
MTA does the analog-to-digital conversion and packetizing functions, and
its embedded cable modem communicates with a Cable Modem
Termination System (CMTS) at the headend.
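The E-MTA's packetizing step can be sketched as follows. The 20 ms framing interval is typical for G.711 voice, but the header layout here is deliberately simplified to a sequence number, timestamp, and SSRC; a real E-MTA emits full RFC 3550 RTP headers:

```python
import struct

SAMPLE_RATE = 8000                 # G.711 telephony sampling rate
FRAME_MS = 20                      # typical packetization interval
SAMPLES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000   # 160 samples per packet

def packetize(samples, ssrc=0x1234):
    """Split a stream of 8-bit samples into RTP-style packets.

    The header is a simplified sketch (sequence number, timestamp, SSRC),
    not the full RTP header a production E-MTA would generate.
    """
    packets = []
    for seq, start in enumerate(range(0, len(samples), SAMPLES_PER_FRAME)):
        payload = bytes(samples[start:start + SAMPLES_PER_FRAME])
        header = struct.pack("!HII", seq, start, ssrc)   # seq, timestamp, ssrc
        packets.append(header + payload)
    return packets

# One second of placeholder audio becomes fifty 20-ms packets
stream = [0x7F] * SAMPLE_RATE
packets = packetize(stream)
print(len(packets), "packets,", len(packets[0]), "bytes each")
```

Each resulting packet is then handed to the embedded cable modem for transmission upstream to the CMTS.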
The network side of the CMTS typically includes the ability to route
signaling and voice packets. Signaling packets are exchanged with
softswitches in the service provider's network to set up and supervise the
call. Voice packets are routed to the called party through a packet network
to a remote CMTS or the PSTN via a media gateway. In addition to
handling call setup and supervision, the Call Management Server (CMS),
which Nortel considers a communication server, is the source of
revenue-generating subscriber features.
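The signaling/bearer split the CMTS performs can be illustrated with a toy classifier. The NCS signaling port (UDP 2427, from the MGCP-derived PacketCable NCS protocol) and the RTP port range shown are common conventions rather than fixed requirements, and the destination names are hypothetical:

```python
# Hypothetical next hops; real networks configure these per deployment
SOFTSWITCH = "cms.example.net"     # Call Management Server (signaling)
MEDIA_PATH = "core-router"         # toward a remote CMTS or media gateway

NCS_PORT = 2427                    # PacketCable NCS (MGCP-derived) signaling
RTP_RANGE = range(16384, 32768)    # a common dynamic port range for voice RTP

def classify(dst_port):
    """Decide whether a packet carries call signaling or bearer voice."""
    if dst_port == NCS_PORT:
        return "signaling"         # exchanged with the softswitch
    if dst_port in RTP_RANGE:
        return "voice"             # routed toward the called party
    return "other"

def next_hop(dst_port):
    kind = classify(dst_port)
    if kind == "signaling":
        return SOFTSWITCH
    if kind == "voice":
        return MEDIA_PATH
    return "default-route"

print(next_hop(2427))      # signaling packet, forwarded to the softswitch
print(next_hop(20000))     # voice packet, routed into the packet core
```

In practice this classification drives not just forwarding but also the per-flow QoS treatment that carrier-grade primary line service demands.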
The stringent requirements of carrier-grade primary line service demand a
two-way network with a robust set of network elements that enables
increased end-to-end QoS and security. Nortel has worked with cable
operators since the mid-1990s to satisfy this need with HFC-based cable
telephony. DMS circuit switches currently provide TDM-based carrier-
grade telephony to 4.9 million cable subscribers. The call processing and
feature code that serves the DMS switch has been ported to Nortel
softswitches as the basis for its cable VoIP offering: the Communication
Server 2000 (CS 2000) and the Communication Server 2000 Compact. The CS
2000 is unique because it supports both VoIP and TDM circuit-switched
telephony. A network using these elements is shown in the second diagram.
The Multimedia Communication Server 5200 can be integrated with the
above solution to add new multimedia features to the voice services that
can be offered by the cable provider. By adding multimedia services to the
voice and video bundle, cable operators can increase their revenue
opportunities by offering multimedia services that market research by
Pollara, Inc. has shown residential customers want and are willing to
pay for. These services simplify and enhance the consumer's
communications experience by consolidating multiple communications
devices into one number and one mailbox. And they enable greater control
over end users' communications by automating repetitive tasks.
Key takeaways
Nortel can help the service provider make the transition. Nortel has a rich
history in telephony and a proven record in VoIP for cable, as well as an
installed base of more than 140 DMS switches carrying 4.9 million cable
telephony lines globally.
On the VoIP front, Nortel has been active in PacketCable™
interoperability work at CableLabs. Its softswitch is PacketCable™
qualified, and it has had a visiting engineer on-site at CableLabs for years.
Several cable operators have deployed Nortel VoIP solutions, giving it real-
world experience.
In addition, Nortel recognizes that VoIP networks come in all sizes and
should support all service types, and offers two versions of its carrier-class
Communication Server (CS). The CS 2200 (formerly known as the
Communication Server 2000 Compact) softswitch occupies a smaller
footprint, consumes less power, and is built on a commercially available
Compact PCI platform with open base software architecture. The CS 2000
is built on the DMS XA-Core multiprocessor platform. Both of these
platforms provide cable operators with the same powerful ability to deliver
local, long distance, and tandem VoIP services on a single platform. Both
platforms deliver the same applications, protocols, and functionality by
fulfilling the softswitch promise: software functionality independent of
hardware platform.
The Nortel Softswitch portfolio delivers unparalleled scalability and reliability,
with the critical attributes that guarantee a successful migration: strategic
architecture, regulatory features, carrier-grade attributes, and
comprehensive services.
Broadband
The broadband market in North America continues to be a dynamic sector
as the competitive landscape and consumer demand for new
communication services continue to evolve. Driven by the need to find new
sources of revenue, service providers are looking for ways to unleash the
potential of broadband networks.
Nortel understands that wireline service providers need to deliver value-
rich service bundles: services such as VoIP, Multimedia Communication
Services (integrated voice, video, and data), broadcast and IP-video
(television), and data services. Our next-generation broadband solutions
are ultra-broadband ready, meaning that they have the high bandwidth,
Quality of Experience attributes needed to deliver the new triple play
service set (voice, data, and video).
Wireline carriers are losing customer ownership as their strategic position
slips. Cable competitors are targeting their customers with value-priced and
value-added alternatives to basic phone services. If the wireline carriers are
to survive and thrive, real service differentiation is needed.
Technical challenge
According to The Yankee Group*, significant capital spending will occur
in the broadband access market in the next four to five years:
approximately US$5 billion annually. This longer-term spending trend is
being driven by a need for service providers to replace much of the existing
broadband equipment with a newer generation of infrastructure that is
capable of supporting a triple play business model.
Solution
Network diagram
The Broadband market has a wide range of technologies, all of which can
leverage a packet-based network to provide voice or other services. The
following diagram illustrates some of these technologies including voice
service through a traditional copper loop from a central office, voice
service through a Digital Loop Carrier (DLC), voice and other services
through Digital Subscriber Loop (for example, ADSL), voice and other
services through Fiber to the Curb (FTTC), and Voice and other services
through Fiber to the Home (FTTH).
Figure 22-7: Broadband market technologies
Solution Details
Nortel has significantly expanded its Broadband Networks portfolio to
enable traditional wireline service providers to deliver a new set of value-
rich, revenue-generating services to consumers and small-to-medium
business customers over a high-bandwidth, ultra-broadband infrastructure.
Nortel Broadband Access Solutions couple best-in-class access products
from strategic alliances with a world-class portfolio of voice, data, and
transport products.
Strategic alliances provide a complete range of access products including
the following:
DSLAM, PON, and Mini-RAM products from ECI* Telecom
Multiservice Broadband Loop Carrier products for the North
American market from Calix*
Multiservice access products for the European or ETSI market from
KEYMILE*.
This powerful combination of new and existing products enables service
providers to deliver high-value, revenue generating services through a
reliable and scalable broadband infrastructure.
Nortel Broadband Fiber Solutions are based on leading Optical Access
technologies such as Fiber-to-the-Premise, Curb, or Business (FTTx)
utilizing PON technology. These powerful access products offer the
convergence of voice, video and data services over a single fiber
infrastructure, thereby delivering ubiquitous and seamless solutions and
eliminating the network bottleneck. Features include a future-proof,
full-service-set offering; reduced capex for a full-service-set network;
reduced opex for the broadband access network; and complementary
operation with DSL-based solutions.
Broadband Fiber Solutions enable service providers to offer customers
triple play services within high-bandwidth service packages that include:
multichannel TV, video-on-demand, high-speed Internet access, multiple
voice lines, games-on-demand, e-mail and more. Nortel is teaming with
other industry leaders, such as ECI Telecom, in bringing technology to the
forefront of this converged network transformation.
The Broadband Fiber Solutions series offers the ability to support the
modular combinations of POTS, Ethernet, xDSL, T1/E1, and RF interfaces
to address any desired service mix, while enabling services that cost-
effectively grow, as the network evolves. Further, Nortel Broadband
Copper Solutions for wireline applications are comprised of next-
generation broadband DLCs, DSLAMs, and mini-RAMs.
Broadband Access Services Gateway 7700. An Optical Line
Termination (OLT) that supports all FTTx types along with point-
to-multipoint and point-to-point configurations with multiple
subscriber interfaces all on the same platform, in any combination,
and without restriction.
Nortel Broadband Copper Solutions comprise next-generation, ultra-
broadband DLCs, DSLAMs, and Mini-RAMs that are capable of
delivering high bandwidth services such as triple play and multimedia
communication services. Together with Nortels existing voice, data, and
transport products, these broadband access products can be combined to
form complete end-to-end solutions, or as a subset of solution bundles, to
meet the needs of individual service providers. Nortel Broadband Copper
Solutions offer wireline carriers the revenue-generating broadband service
infrastructure they need to enhance long-term competitiveness.
These Broadband Copper Solutions include products offered under the
Nortel brand as well as products available through strategic alliances.
Broadband Access Services (BAS) Gateway 7700 & 7500. The
Nortel BAS Gateway 7700 is our core, full-size DSLAM product,
delivering up to 960 subscriber lines, including all varieties of DSL
and fiber, from a single shelf. The BAS Gateway 7700 is suitable
for mass deployments from central offices or from environmentally
controlled street cabinets. These products are built upon
technologies from a strategic alliance with ECI Telecom.
C7*. The C7 is a Multiservice Broadband Loop Carrier (MS-BLC)
that fits into existing networks, integrating with traditional
operations and support systems. It also supports both fiber and
copper-based services while providing a bridge for the delivery of
packet-based broadband services to every subscriber.
KEYMILE UMUX 1500*. For markets requiring adherence to
ETSI standards, the KEYMILE UMUX 1500 is a carrier-grade
multiservice access platform, offering a wide range of services and
support for different technologies (ATM, IP, SDH & PDH) on a
single platform.
Figure 22-8: The Nortel Ultra Broadband portfolio
Key takeaways
Nortel is delivering Ultra Broadband solutions and services that enable the
future success of service providers through:
Elimination of stranded investments through a scalable and evolvable
network infrastructure
Future-proof capacity for new services and unplanned
bandwidth requirements
Support for copper and fiber deployment scenarios from the same
platform
Enabling of services innovation by providing the capability to
deliver a new range of triple play and multimedia services
Carrier VoIP and DMS-10 integration to provide VoIP services
to residential subscribers
Features to enable video/TV services to the home
End-to-end networks that are services manageable and network
manageable
Services management that streamlines the process of turning up
new services through automation, toward the goal of zero-touch
provisioning
Integration of the Calix EMS with the Nortel Integrated
Element Management System (IEMS) platform
End-to-end solutions that are highly secure and reliable
Porting of security expertise to Nortel access solutions
Working to minimize denial-of-service attacks
Rigorous modeling and testing of solutions to ensure reliability
Conclusion
Network convergence
The varied networks that exist today have evolved in parallel, and each
offers important attributes of its own. TDM networks are reliable, secure,
easy to use, and optimized for voice traffic. Data networks are efficient,
scalable, and optimized for packet traffic. Wireless networks are
ubiquitous, convenient, and optimized for mobility. The solution is to
transform these networks to maximize profit and market share without
sacrificing the valuable attributes of each. These dual goals can be achieved
by migration: transforming traditional networks into packet-based
networks that can offer all of the features and attributes of traditional
service in a simplified, cost-effective, service-enabling manner.
Nortel offers a broad and deep portfolio of services that fully leverage the
packet-based networks that service providers need to build. Service
categories are as follows:
Data networking services such as Virtual Private Networks
Mobility services such as voice over Wireless LAN
Integrated voice, video and data
Personalization of services to include content delivery and security
Next-generation residential and business services
Optical broadband services such as Storage Area Networks
(SANs), and optical Ethernet
Carrier-grade reliability
Nortel has a strong track record for service delivery that spans decades of
innovation while providing customer support with the implementation and
maintenance of these service-bearing networks. Carrier-grade service is
dependable, secure, and evolvable. We understand the importance of these
attributes, and support our customers in maintaining them.
Our efforts toward convergence are fully supportive of what service
providers want to do with their networks. Networks have been built in
stovepipes over many years, each network optimized to deliver one type
of service: voice, data, mobile, public, private, virtual, or dedicated. The
packet technology available today enables providers to converge more and
more service types on a single packet core network, resulting in greater
efficiency, lower operating costs, and the potential for new service
revenues. Nortel has a comprehensive packet-based portfolio today that
supports convergence of all types.
Nortel investments are focused on the high value control points in the
network (services edge, VoIP, and converged core) and on aggregating this
value to create multimedia multiservice broadband networks that feature
packetization, mobility, integrated multimedia services and applications,
and broadband networking.
Nortel is creating new value for the service provider by performing the
following actions:
Bringing carrier-grade scale and reliability to VoIP and carrier
capabilities to the Enterprise
Continuing to deliver the mobility value of wireless networks while
extending it to all networks
Delivering platforms to facilitate rich integrated multimedia
services
Creating common reference architecture across the product line to
facilitate convergence
Driving cost performance and scaling of services
Service intelligence
Nortel understands the capabilities and constraints in the transformed
network and how they need to be used, and can add new capabilities into
the network quickly, to give service providers the edge in offering new
services. This capability is called Service Intelligence, and it guarantees
network performance based on the end user's requests, network resources,
and service application needs. Service Intelligence allows IP networks to
move beyond "best effort" and dynamically adapt to customer
requirements.
Given convergence and higher performance levels, the next challenge to
value-rich service is resource allocation. The transformed network must
make intelligent use of resources through policy, authentication, billing and
QoS. When this is in place, the service provider can deliver with
confidence the services that users demand.
The transformed network
All of these pieces fit together to create the transformed network. It is
simplified through convergence, robust because it retains carrier-grade
attributes, and more controllable through Service Intelligence. The result is
a network that delivers value-rich services, lowers costs, and builds the
service provider's brand.
The market in which service providers operate has changed. End users
have increasingly challenging requirements to meet, and service providers
are beset by nontraditional competition and margin erosion.
The solution lies in transforming the network to lower costs, enhance
performance, and enable new services.
Nortel is ready to help carriers achieve the twin objectives of converging
their networks and delivering value-rich services that enhance customer
loyalty and increase revenue. Through our experience, market leadership,
broad and deep product portfolio, and partnerships with value-add vendors,
we can help the service provider meet and conquer both today's challenges
and tomorrow's.
The examples provided in this chapter illustrate how Nortel can help
service providers transform their networks to meet today's market and
service requirements, and thrive in the new telecommunications
marketplace.
References
IDC, U.S. Hosted IP Voice: Market Analysis and Forecast 2002-2007.
Author: Thomas S. Valovic, IDC Study No. 28803, released January, 2003.
InfoTech, Enterprise Convergence: The Race for IP Telephony Supremacy,
report released April, 2003.
Chapter 23
Private Network Examples
Stéphane Duval
Tim Mendonca
The data solution
The first part of this chapter describes the data solution of this private
network example.
Introduction
The purpose of this chapter is to demonstrate the ability to satisfy customer
requirements for real-time, converged networks with the technologies
discussed in this book, using Nortel products as the example.
The focus is voice over a data infrastructure. It is not the intent of this
section to describe and explain data routed and routing protocol standards.
Also, the example will focus on the Headquarters that incorporates all
aspects of deployment. By adjusting the scale of the deployment, the
solution can be adapted to all sizes of organizations.
This example is by no means the only approach that can achieve the needed
QoE results. Used as a model, it can be adapted to develop a custom
solution based on unique organizational needs. A large variety of
interchangeable products from Nortel create limitless deployment options
for converged infrastructures.
Starting with the definition of the four types of convergence, Quality of
Experience (QoE), and Quality of Service (QoS), and with the identification,
definition, categorization, and characterization of a set of Solution Design
Attributes (SDAs) that address different aspects of a solution's architecture, a
common convergence vocabulary will be established.
systematically gain a clear understanding of issues that arise in Enterprise
data networks when deploying real-time applications and learn analysis
and design techniques to ensure a high level of customer satisfaction and
network performance.
Getting Started
Business success in moving to a converged network will rely on knowing
the underlying criteria affecting the current state of your infrastructure.
This knowledge assists in the development of processes to evolve your
existing network capabilities in order to achieve performance levels
adequate for real-time applications.
Business drivers for convergence
Many Enterprises embark on a convergence path to lower costs, but the
benefits extend beyond an improved total cost of ownership. In fact, the
productivity gains and competitive benefits that are realized through an
integrated network of voice, video, data, and applications quickly change
what begins as a tactical decision into a strategic one that helps the
Enterprise run its business better and improve its customer service.
Scope the project
Based on the convergence capabilities you intend to implement, start by
defining the scale and scope of the project, in terms of the number of users,
number of sites, and the types of applications you will run. What is the end
result or service level you are trying to deliver to the end-users?
Identify implications on organizations with whom you partner for success.
It is absolutely critical that any convergence projects be undertaken with
the inclusion of, and in partnership with, the various IT departments. In
addition, identify the impacts on operations personnel and business units
that will benefit from improved employee productivity and enhanced
customer service. Assess the impact on human resources and the security
organization, since they will need to ensure IP Telephony and multimedia
are integrated into the Enterprise security policy.
Current state of the network
Looking at the communications services your Enterprise provides, you
need to determine the current state of the telephony and data
infrastructures. From a network infrastructure point of view, a thorough
understanding of the capabilities of the network backbone and the
technologies used to transmit voice, data, and video on the network is
needed. Will a phased approach be possible or will a forklift upgrade be
necessary? An inventory of your network capabilities will help you make
an informed decision.
Using design attributes to assess how your current infrastructure addresses
such issues as latency, jitter, and reconvergence mechanisms (spanning
tree, Split Multilink Trunking) is crucial to the success of this type of
project. The goal is to develop a clear and accurate evaluation of the
infrastructure. Failure to achieve this will ultimately tarnish end-user
QoE expectations upon completion of the project.
Transition risk assessment
A number of business parameters affect the roadmap for a particular
Enterprise. It is important to consider the impact that moving to a
Chapter 23 Private Network Examples 543
Copyright 2004 Nortel Networks Essentials of Real-Time Networking
converged network will have on business continuity, organizational
dynamics, security, and scalability. Fundamentally, you need to evaluate
the level of risk you are willing to accept in transitioning to convergence.
This evaluation will help you determine a time frame to reach network
convergence that makes sense for your business.
Business continuity
During the transition to a converged network, you will need to consider
what measures you can take to ensure the continuity of external services for
customers and the availability of communication applications needed by
employees, suppliers, and partners. Plan disaster recovery and redundancy
for mission-critical operations that are essential for conducting business.
Organizational dynamics
Moving to a single network that carries voice, data, and video traffic will
necessitate a new IT paradigm for managing the unified infrastructure.
Consolidated management policies will be needed, as will a redefinition of
roles and responsibilities of network management personnel previously
aligned with either the voice or data side of the Enterprise. During the
transition period, you need to consider that there will be real costs
associated with personnel realignment and retraining activities. Assess the
impact that moving to converged applications will have on how employees
carry out their day-to-day tasks.
The importance of Network Health Check (NHC)
When implementing any real-time network, a Network Health Check
(NHC) should be conducted before implementing service. This is a wise
step for not only existing networks but also for those that are brand new.
An NHC will potentially help identify a number of anomalies found in
networks, including a phenomenon known as duplex mismatch. Although duplex
mismatch is fairly well known, it still occurs in many networks. This is
due to historical issues around the operation of auto-negotiation features;
companies establishing policies that interfaces will be statically set for
network planning and performance reasons; misunderstanding of how
auto-negotiation works; certain interfaces only being able to run 100 Mbps
full-duplex when set to auto-negotiate; and especially a mix of the above.
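As a minimal illustration of what an NHC looks for, the duplex-mismatch logic can be sketched as follows. This is a hypothetical model, not any vendor's actual negotiation code; the port-mode names are invented for the example.

```python
def resolve_link(a: str, b: str) -> tuple:
    """Return the duplex each end settles on. Modes: "auto",
    "forced-full", "forced-half". An auto-negotiating port whose
    partner does not negotiate falls back to half duplex."""
    if a == "auto" and b == "auto":
        return ("full", "full")  # negotiation succeeds end to end
    da = "half" if a == "auto" else a.removeprefix("forced-")
    db = "half" if b == "auto" else b.removeprefix("forced-")
    return (da, db)

def duplex_mismatch(a: str, b: str) -> bool:
    """True when the two ends settle on different duplex modes."""
    da, db = resolve_link(a, b)
    return da != db

# The classic trouble case: one side statically forced to full duplex,
# the other left on auto, which falls back to half duplex.
print(duplex_mismatch("forced-full", "auto"))  # True: full vs. half
```

The model captures why static-setting policies plus auto-negotiation defaults produce mismatches: the forced side never participates in negotiation, so the auto side must assume half duplex.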
Designing the Real-time Networking Solution Infrastructure
The following examples of real-time networks require that we make some
simplifying assumptions. These are fictional network deployments.
Therefore, we need to establish customer requirements. We assume that our
fictional customer requires the following:
Services. The services for a real-time network include voice, video
streaming, video/audio conferencing, e-commerce, file, database,
and print services.
Performance requirements. Industry accepted performance
standards for a converged real-time network include resiliency,
availability, and security.
Network size and geography. An analysis of service quality
requirements for all areas of the network has been completed and a
corporate network structure analysis focusing on national
telecommunications requirements has been completed and
confirmed by the carrier and/or service provider. The network
design considerations include a data center with regionally
distributed voice/data services; one national headquarters building
in New York with 3,000 employees on eight floors; two regional
offices (one in Santa Clara, the other in Orlando) with 400-500
employees on four floors; and two district offices (one in Dallas,
the other in Minneapolis) with fifty employees on two floors.
For this example, the customer wants to implement VoIP to all employees
with a multimedia overlay. All employees will have access to unified
messaging. Additionally, the customer wants to deploy a VoIP contact
center that is deployed across the three major sites.
Figure 23-1: Example network (the corporate office, two regional offices,
and two district offices interconnect through the service provider over
physical and virtual circuits; the Enterprise Solution labels in the figure
highlight redundancy, resiliency, scalability and efficiency, performance
and management, and proven reliability)
Call Centers represent a unique challenge to real-time converged
Enterprise networks. When sizing a network for traditional VoIP
applications, average usage is 6-12 CCS (36 CCS = one hour) and
trunking requirements are generally 25%. In the case of Call Centers,
average usage is usually figured at 36+ CCS, and trunking requirements
range from 125% to 200%. Furthermore, Call Centers are generally
customer facing and mission critical; therefore, they require the highest
QoE possible.
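The CCS arithmetic above can be sketched directly. The figures are the example's illustrative numbers, not a dimensioning tool; the one fixed fact is that 36 CCS (hundred call-seconds per hour) equals one circuit busy for a full hour, i.e. 1 Erlang.

```python
# Illustrative CCS/Erlang sizing sketch.
# 1 Erlang = one circuit busy for one hour = 36 CCS.

def ccs_to_erlangs(ccs: float) -> float:
    """Convert CCS (hundred call-seconds per hour) to Erlangs."""
    return ccs / 36.0

def offered_load(users: int, ccs_per_user: float) -> float:
    """Total offered load in Erlangs for a group of users."""
    return users * ccs_to_erlangs(ccs_per_user)

# A typical office user at 9 CCS (inside the 6-12 CCS range) vs. a
# call-center agent busy essentially the whole hour (36 CCS):
office_load = offered_load(400, 9)   # 100 Erlangs for 400 users
agent_load = offered_load(50, 36)    # 50 Erlangs for 50 agents

# Call centers trunk at 125-200% of agent positions, far above the
# ~25% typical for general office traffic:
trunks_needed = (50 * 1.25, 50 * 2.0)  # 62.5 to 100 trunk equivalents
```

A real study would feed these loads into Erlang B/C tables for a blocking or queuing target; the sketch only shows why call-center trunking dwarfs office trunking.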
We will assume that proper call load and network sizing studies have been
completed and the proper amount of bandwidth exists in the network to
support call loads. Note that even when there is adequate bandwidth in a
network to support all applications, QoS mechanisms play an important
role in providing QoE.
The last assumption identifies scarcity levels of resources. For this
example, we will assume that needed resources are available.
Developing the core network
Several options exist when designing a telecommunication infrastructure
for an organization. The first step is to identify the purpose of the
network. In this design demonstration, we identified a VoIP, multimedia
and data solution with specific QoE requirements. Core telecommunication
choices are as follows:
Frame relay, ATM
Multiprotocol Label Switching (MPLS)
Wave Division Multiplexing (Coarse Wavelength Division
Multiplexing [CWDM] or Dense Wavelength Division
Multiplexing [DWDM])
Resilient Packet Ring (RPR)
Routed (Internet Service Provider)
Figure 23-2: Core network
For the purpose of this design example [1], we chose ATM and frame relay
circuits supplied by a local service provider. ATM is chosen for the core
network because of its proven capability to deliver QoS in a packet
environment. Frame relay has been chosen for the branch/district offices
based on its ability to deliver high bandwidth at low cost. While frame
relay can prove to be challenging when it comes to real-time, converged
networks, we have kept it in this example due to its widespread adoption
and low cost.
Headquarters and the high performance data center
The high performance data center (HDC) must also support critical business applications. Enterprises can use
secure HDC to host business applications, implement firewalls or virtual
private networks, and provide storage services and content delivery of
static and streaming media. Consolidation of data services and centers
allows Enterprises to centralize critical computing resources, create virtual
data centers that span multiple locations, and reduce operational costs
without the performance penalties or security concerns typically associated
with remote access. Some key functions of an HDC are as follows:
1. Please refer to the Disclosure Notice in the Section VI Introduction (page 498).
Distributed intelligence and Web-optimized components
Improve business continuity through massive processing
throughput and transport bandwidth
Support delivery of critical business applications for employees,
business partners, and customers
Consolidate or outsource data center functions without performance
penalty or security concerns
Extend network resources and reach across public shared facilities
Convergence-ready infrastructure to deploy VoIP services
A service strategy must be developed in order to provide services to end
users. The service strategy must include an infrastructure that can provide
these services. In a previous section, solution design attributes were
identified that must be addressed in the development of a high-performance
data center.
Headquarters and the Secure High Performance Data Center
To simplify the explanation of the design, let's start by describing how to
secure the access to the Data Center from the WAN links and Internet.
Afterwards, the design will be expanded to add the following abilities
to the data center:
Ethernet Switching and Load Balancing
Content Delivery Networking
Solutions Management
Closets and Aggregation Points
Voice Communication Services
In the previous telecommunication infrastructure analysis section, it was
decided that ATM would be used to connect the headquarters with the
regional offices and frame relay would be used from the regional offices to
smaller district offices.
Figure 23-3: Secure high performance data center
Nortel provides two alternatives for delivering the ATM connectivity. A
Passport* 7K multiservice platform or a Contivity* VPN device can be
used to establish an ATM link to the regional offices. Both provide ATM
connectivity but only one can provide Layer 3 services, firewalling and
VPN tunneling ability to secure WAN links. This requirement was
established during earlier assessments and the Nortel Contivity solution
was identified as meeting these requirements.
Even though these links are internal to the organization, security is still a
concern. Organizations may be just as vulnerable to internal security
breaches as to external ones. To secure a 4 Gbps ATM link, a high
performance firewall will be needed. The Alteon* Switched Firewall will
address the security requirements for these links. Any firewall can be used;
but, typically, most firewalls are limited to 1 Gbps throughput. Nortel's
firewall solution provides an accelerated switched firewall with four
Firewall Directors and two Firewall Accelerators, used to address the
resiliency, redundancy, performance, and efficiency of the security
solution. It also provides much higher throughput (up to 4.2 Gbps) [2].
Attached to the Firewalls will be a Demilitarized Zone (DMZ) to provide
Web services and LDAP directory services.
2. Please refer to the Disclosure Notice in the Section VI Introduction (page 498).
Figure 23-4: Firewall security
The diagram describes three main areas, Demilitarized Zone (DMZ),
Secure Voice Zone (SVZ), and LAN (data center and user access). Notice
that the DMZ and SVZ are isolated. The first level of security for external
user access is provided by the Contivity CheckPoint firewall. The
Contivity also provides secure branch VPN tunnels to remote office
locations and individual user tunnels for teleworkers on the Internet. With
a valid tunnel established, secure users are granted access to the second
level of security, the Alteon switched firewall (commonly referred to as an
ASF). Based on established filters, users are granted access to the LAN,
Data center and SVZ. Both Firewalls protect the DMZ from hacking
attacks such as Denial of Service Attacks (DoS) and SYN-ACK attacks.
Filters can also be established to detect malicious code.
Both the Contivity Platform and the ASF are scalable, redundant, and can
be configured for resiliency to ensure a Quality of Experience and high
level of availability.
The Secure Voice Zone (SVZ) and DMZ are protected from both internal
and external users and can be extended to include Data center application
servers.
The Passport 8600 switches are interconnected through an Interswitch
Trunk (IST) and will provide a resilient, redundant Split Multilink Trunk
(SMLT) core for the data center. The two Passport 8600 switches
connected in series provide the SMLT capabilities that connect all LAN
wiring closets with full redundancy and subsecond resiliency.
Ethernet Switching and Load Balancing
The Passport 8600 integrated with a Web Switch Module (WSM) will load
balance the two Contivity devices at the customer edge to provide VPN
services that secure remote access for Internet users and internal locations.
Internet Services can either be provided through the current ATM service
provider or through another. If another Service Provider (SP) is used, the
connections from the SP will terminate on the Firewall. Multiple Internet
connections by different Service Providers can be used to enhance
resiliency and redundancy, availability, and reliability of internet services.
Figure 23-5: Ethernet switching
The Passport 8600 was selected for many reasons, but primarily for SMLT.
SMLT enhances the resiliency capabilities of the network by reducing
reconvergence times to below one second; third-party validation of SMLT
recorded less than one second of packet loss. Spanning tree (802.1d) and
Enhanced STP (802.1w) cannot guarantee that no session failures will
occur in the event of a module or switch failure. SMLT, in the event of any
failure (link, Passport module, or a switch within the pair), will result in
less loss than the three-second timeout required for a network session. So,
in the event of a failure in any component of the two Passport 8600s, data
sessions and voice calls will not be terminated and user performance impact
will be kept to a minimum.
The design addresses resiliency
Resiliency is a Solution Design Attribute (SDA) that defines multiple
concurrent network data flow paths from source to destination and provides
a reconvergence method that reduces the amount of time needed for a
physical or logical recovery of network in order to ensure client session
integrity.
Resiliency in a network environment refers to those logical protocol
mechanisms that both sense network outages and switch to new resources
quickly enough to maintain sessions, or that have a second path that can be
switched to in time to maintain sessions. Resiliency is similar to the
concept of redundancy; however, redundancy in our definition refers only
to hardware and has a different set of qualities. Resiliency includes such
protocols and mechanisms as SMLT, Load Balancing, and Virtual Router
Redundancy Protocol (VRRP).
SMLT dramatically improves the reliability of Layer 2 networks. It
operates between a building's wiring closet (edge) switches and the core or
aggregation switches for the building. An IST makes a pair of Layer 3
switches appear to be a single switch to all attached devices. The pair also
runs in Active-Active mode and load shares incoming traffic, utilizing the
capital investment to its full potential: no links are blocked and bandwidth
is maximized.
SMLT enables rapid fault detection and forwarding path modification
allowing this type of environment to recover from a partial or full failure of
an aggregate switch. This innovation adheres to reliability standards
eliminating any single point of failure in the LAN/WAN engineering
design. Compared with Enhanced STP, SMLT recovers from the failure of a
link, module, or aggregate switch in less than one second. Since all SMLT
links are predefined, SMLT is only concerned with the ports associated
with the specific SMLT link that failed and does not require any
adjustment to any other portion of the network. Although SMLT is
primarily designed to enhance Layer 2, it also provides benefits for Layer 3
(Routed SMLT, R-SMLT) networks by reducing network convergence
delays.
For Layer 2 bandwidth optimization, SMLT allows traffic to utilize all of
the available links between switches and devices. Unlike STP that blocks
ports whenever a loop is detected, SMLT sends traffic over all available
links. Load sharing is achieved by the MLT path selection algorithm used
on the edge switch. This is accomplished on a SRC/DST MAC address
basis. Load sharing in the reverse direction is achieved naturally by traffic
arriving at both aggregation switches and being directly forwarded to the
edge switch and not over the IST trunk.
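The SRC/DST MAC load-sharing idea can be illustrated with a toy hash. The actual Passport MLT path-selection algorithm is not specified here, so the XOR fold below is purely a stand-in; the point is only that every frame of a given MAC pair maps deterministically to one member link, which preserves frame order per conversation while spreading conversations across links.

```python
# Hypothetical sketch of MLT path selection on a SRC/DST MAC basis.

def mlt_select_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Choose one of num_links MLT member links for a MAC pair."""
    def fold(mac: str) -> int:
        # XOR all six octets of the MAC address into one byte.
        value = 0
        for byte in bytes.fromhex(mac.replace(":", "")):
            value ^= byte
        return value
    return (fold(src_mac) ^ fold(dst_mac)) % num_links

# Same conversation, same member link, every time:
a, b = "00:04:38:aa:bb:01", "00:04:38:cc:dd:02"
link = mlt_select_link(a, b, 4)
```

Because the selection depends only on the MAC pair, no per-frame state is needed on the edge switch, and reverse-direction traffic can be balanced independently by the aggregation switches, as the text describes.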
Spanning tree recalculation is avoided because the edge switches use a
standard MLT connection, which makes the two aggregation switches
appear to be a single switch. In this scenario, spanning tree is disabled and
convergence times after a switch or link failure will be in the subsecond
range. Both STP and SMLT eliminate a single point of failure within a
network topology. In a typical SMLT configuration, any single link or one
of the aggregation switches can fail and normal operation will be restored
within one second. This can be compared with STP, which is dependent on
the size of the network to quantify the convergence time.
This subsecond reconvergence allows IP sessions to stay connected, which
is crucial to IP telephony applications. Because today's IP infrastructures
demand high availability, the technology used should support redundant
(and load sharing) switch fabric/CPU modules with subsecond fail-over.
Enhanced resiliency can be provided using Split Multilink Trunking
(SMLT) and Routed Split Multilink Trunking (R-SMLT), eliminating the
need for the Virtual Router Redundancy Protocol (VRRP).
Nortel has developed two protocols, SMLT and R-SMLT, that eliminate the
inadequate reconvergence delays delivered by Spanning Tree (STP),
Enhanced STP (802.1w), and VRRP when deploying real-time networking
solutions.
Adding in application switches and keeping in mind redundancy and
resiliency, SMLT will be used to connect two external Alteon application
switches. These application switches will control load balancing and
content redirection. The application switches will be connected to the
Passport 8600s and provide load balancing services to the dual-homed file
and print servers.
The design addresses reliability
Reliability is an SDA defined as any component or feature in a system that
consistently produces the same results, meeting or exceeding its
specifications.
Reliability comes mainly from the voice world as an extension of the
concepts of redundancy and MTBF (mean time between failures). In other
words, it takes into account the combined effect of redundancy and MTBF
on the uptime of the system. Five nines (99.999% availability) equates to a
total of roughly five minutes of downtime a year.
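The downtime implied by an availability target is simple arithmetic, sketched below (a minimal illustration; no leap-year refinement).

```python
# Annual downtime implied by an availability target, in minutes.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability: float) -> float:
    """Annual downtime in minutes for a given availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

five_nines = downtime_minutes(0.99999)  # about 5.3 minutes per year
four_nines = downtime_minutes(0.9999)   # about 53 minutes per year
```

Each additional nine cuts the allowed downtime by a factor of ten, which is why five-nines targets drive the hot-redundancy and subsecond-failover requirements discussed in this chapter.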
The design addresses redundancy
Redundancy is an SDA that reduces the probability of errors in
transmission by duplicating hardware devices within a solution's design,
or components within a device.
In our definition, redundancy refers strictly to hardware. This is done
because of the different nature of redundancy between hardware and
software. In the case of hardware, redundancy can range from partial
redundancy such as incrementally providing redundant components such as
power supplies, link hardware, and central processing all the way to what is
referred to as A/B units, where the entire hardware platform is duplicated.
Redundancy also comes in different levels: hot, warm, and cold.
Hot (Active-Active) redundancy is defined as the capability of
switching to redundant hardware automatically without losing sessions.
Warm (Active-Standby) redundancy is defined as the capability of
switching to redundant hardware automatically; however, sessions may be
lost due to slow convergence.
Cold redundancy is defined as the capability to manually switch to
redundant hardware.
The design addresses performance
Performance is an SDA that quantifies the maximum throughput capacity
of a solution's design and identifies bottlenecks and potential delays within
the solution.
Performance should be characterized with end-to-end metrics and
measured with objective tools. For instance, if we have set a QoE
expectation of near toll quality, then we should be able to transmit a voice
flow across the network and achieve R = 80 or higher. The R score takes
into account effects of delay, loss and jitter.
Performance targets should be set for all applications that are
determined to require a certain level of QoE, for instance:
Voice, as in the first example, having an R score of 80 or higher
ERP applications may be identified as having a certain QoE based
on response time or other criteria
Certain session-oriented applications may have a QoE that focuses
on loss
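Checking a voice flow against an R-score target can be sketched with the basic E-model structure. ITU-T G.107 defines the full model (including simultaneous-impairment and delay-curve terms); the simplified form and the impairment values below are illustrative placeholders, not measurements.

```python
# Simplified E-model shape: R = Ro - Id - Ie + A.

DEFAULT_RO = 93.2  # G.107 default basic signal-to-noise term

def r_score(delay_impairment: float, equipment_impairment: float,
            advantage: float = 0.0, base_r: float = DEFAULT_RO) -> float:
    """Compute a simplified R score from impairment terms."""
    return base_r - delay_impairment - equipment_impairment + advantage

def meets_near_toll_target(r: float) -> bool:
    """The QoE target set in this example is R = 80 or higher."""
    return r >= 80.0

# Modest delay and codec impairments still clear the R = 80 target:
r = r_score(delay_impairment=5.0, equipment_impairment=7.0)
```

Delay, loss, and jitter all fold into the impairment terms, which is why the text insists performance be measured end to end rather than per hop.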
The design addresses efficiency
Efficiency is an SDA that quantifies input and output processing delays
within a solution.
In our definition, efficiency refers to the traditional impairments identified
in real-time protocol analysis: jitter, delay, loss and throughput.
Here we refer to the ability of the network's design and implemented
protocols to optimize latency, jitter, throughput, and packet loss to meet
our QoE objectives.
The design addresses effectiveness
Effectiveness is an SDA that quantifies the quality of being able to perform
intended IP Telephony and data application functions.
We define effectiveness as the tenet that determines, on a subjective level,
if we have met our QoE targets.
Do users perceive the quality of a voice call commensurate to that
committed, that is, toll quality, near toll quality, or cellular quality?
Do data users perceive their QoE acceptable?
Does the network provide the level of availability specified in QoE?
The design addresses Security
Security is defined as a tenet of QoE because it, or the lack of it, can
impact many of the other tenets of QoE.
During the move to a converged network, we will need to ensure that real-
time applications are secure and that business assets are protected against
malicious intent from both external and internal threats. Steps need to be
taken to safeguard the confidentiality, integrity, and accuracy of network
communications, that is, what should remain private, stays private.
Multilayer security. A multilayer approach to security will enable
you to offer variable-depth security, where each additional security
level builds upon the capabilities of the layer below. For example,
basic network compartmentalization and segmentation can be
provided by VLANs. A second layer of security can be achieved
through the use of perimeter and distributed firewall-filtering
capabilities at strategic points within the network. VPNs can be
added as a third layer for finer-grained security. The use of firewalls
and traffic managers capable of deep-packet inspection and
application-level gateways will provide additional security against
attacks directly targeting applications or payload (content).
Security for administration personnel. The more open the
Enterprise and the more centralized the network management
system, the greater the requirement for stringent security for
network management processes. This is especially true when the
network is in a transitional state. Access authority and privileges
granted to network management personnel must be carefully
secured to protect network configuration, performance, and
survivability.
Continuous security policy management and enforcement
A properly designed and implemented security policy must identify the
resources in the Enterprise that are at risk and address how to mitigate
threats. The security policy must enable vulnerability assessment, define
appropriate access control rules, and help identify and discover violations.
Risk and vulnerability assessment must be performed at all levels of the
network.
Without appropriate security features, VoIP networks are much more
vulnerable to eavesdropping, theft and denial of service than traditional
telephony networks. Logical security, where the system is contained to an
isolated intranet protected by firewalls, is not sufficient in today's
environment. A system can be subjected to internal attacks from a
malicious user or from a pervasive worm that is transferred to a hard drive.
IP telephony systems are now connected to the corporate intranet or the
Internet. As such, security needs to be enhanced to address the new world
threats.
Security in an IP environment is based on the following components:
Physical and logical security of the infrastructure (such as end
points, switches, and routers)
Network Element (NE) security of all the system components
Equipment security of the servers and other hardware
Software security of the applications
Client security regarding access control and privileges to the
systems
Security of soft clients on multiuse personal computers
Demilitarized Zone (DMZ). A DMZ is a term first used in complex
multiple-machine firewall setups, where a computer is placed outside the
firewall but is still available for use by the internal (protected) network.
The advantage of a DMZ computer is that it can access, and be accessed
from, the entire Internet. The disadvantage is that it may be vulnerable to
attack from unknown parties.
Secure Voice Zones (SVZ). Securing telephony is an important step in a
comprehensive security strategy. A secure telephony solution framework,
as part of a unified security architecture, leverages both the security levels
of traditional resilient switched telephony networks and a sustainable
migration path to a converged IP network. All levels of security call for a
secure voice or IP telephony zone, since all IP telephony servers are
vulnerable to attack, malicious or otherwise, from within the Enterprise as
well as outside. A stateful firewall with SIP and H.323 protocol support is
needed to provide an SVZ and four levels of security (minimum, basic,
enhanced, and advanced) that are based on a unified security architecture
to ensure these critical servers and call servers are highly available.
Security. SVZs for IP telephony devices must ensure accessibility without
compromising the confidentiality and integrity of other Enterprise network
resources.
The design addresses ability
Ability is an SDA that describes capabilities inherent to a solution: the
quality of being able to perform and achieve functional requirements for
QoE independently, with the use of predefined rules, filters, and content
redirection.
We define ability as the capability of a product or solution to sense
problems and creatively/uniquely implement technologies/protocols to
provide the level of service the network was designed to achieve.
The design addresses scalability
Scalability is an SDA that defines how well a solution's design is capable
of handling growth and expansion and addresses issues inherent to small,
medium, and large deployments.
When designing a network, scalability refers to the ability for the network
to grow from the initial installation and capability to some projected
growth and capability over a known/defined horizon. For example, a
network may be implemented for twenty sites, 10,000 users and meant for
voice and data initially but be able to scale to support forty locations,
15,000 users and, additionally, implement multimedia and e-commerce
over the next five years.
Note: It is not always feasible to build the initial network to be scalable to
future network growth and requirements; however, these should always be
taken into consideration.
The design addresses interoperability
Interoperability is an SDA that defines the ability to exchange and use
information provided by other solutions vendors.
We define interoperability as one of the tenets because it can have such a
tremendous impact on the rest of the network if it is either overlooked or
not given enough consideration. Interoperability is often minimized
because it is assumed that if multiple products or vendors implement the
same protocols or standards, they will easily integrate and interoperate.
Regretfully, this is not the case.
Strict control of changes in code, bug fixes, and rev-level changes should
sometimes be thoroughly documented and tested before implementing,
depending on the level of interoperability required. For instance,
interoperability at the PPP level requires much less consideration than
interoperability at the SIP or H.323 level.
Content Delivery Networking
Nortel's Alteon Application Switch helps put an end to the brute force
approach. The Alteon Application Switch is a multiapplication switching
system that allows Enterprises to maximize existing investments in servers
and networks through application-intelligent traffic management and
integrated application support. The switch also allows service providers to
efficiently enable differentiated services for their Enterprise customers.
An Alteon Application Switch is a multiapplication switching system that
performs wire-speed Layers 2/3 switching and high-performance Layers 4-
7 intelligent traffic management for applications such as server and
network device load balancing, application redirection, security, and
bandwidth management. The Alteon Application Switch is commonly used
in server farms, data centers, and networks. Based on a next-generation
version of the proven Alteon Virtual Matrix Architecture and the
application-rich Alteon OS Traffic Management Software, the switches
were built from the ground up as specialized high performance Layers 4-7
application switches and enable the broadest range of high-performance
traffic management and control services.
The Data Center is built and is able to provide basic services, such as
Virtual Private Networking, File and Print Services, and Web Services in
the DMZ. We also enable the design to support a secure voice zone and
maximize our technology investment.
Enterprise Content Delivery Networking (CDN) solutions can be grouped
into two primary categories: solutions designed to accelerate the
performance of external Web and data center infrastructure and solutions
designed to enable high-quality intranet content delivery for applications
such as e-learning, corporate Webcasts, and WAN bandwidth savings.
Figure 23-6: Content delivery
In order to optimize the delivery of services such as secure web access
(HTTPS) and improve end-user Internet response to reduce the "world wide
wait" syndrome, tools can be used to enhance the data center. An Alteon
Content Director will be connected to each Passport 8600 and load
balanced through the Web Switch Module (WSM) in each Passport 8600
delivering user-aware request-to-content routing capabilities based on
Layers 3-7 protocol attributes and true user proximity. End user requests
will be serviced by the fastest server pool for that particular request.
From the application switches, two devices will be load balanced: an
Alteon Content Cache, enabling high-performance caching, streaming, and
filtering services of end user content and pre-positioning web content
closer to the end user, and an Alteon acceleration appliance, which
intelligently speeds application traffic by offloading specific compute-
intensive functions from servers. This includes the Alteon SSL Accelerator
for high performance SSL offload and SSL VPN, and the Alteon Content
Cache for high performance caching and streaming.
Figure 23-7: Management services
Solutions Management
Several management services are added in order to control the
HDC: a Security Manager to manage the Firewalls and Contivity devices,
and Optivity* NMS, OSM, and QoS Policy Manager to monitor
network devices and establish QoS policies and packet prioritization. A
Content Manager is added to control Content Caches within the network
and to schedule and control the prepositioning of content to those caches.
Another device added is the Wireless LAN Security Manager, which controls
and secures wireless access points within the corporate headquarters.
Partnering with NetIQ* (a VoIP performance monitoring and testing tool),
combined with the Optivity Network Management System (NMS), enables
proactive network monitoring of real-time applications. If an event occurs
that could negatively impact voice quality, such as high delays in the
network, an immediate notification is triggered and Optivity NMS identifies
the location of the potential fault.
The design addresses manageability
Manageability is an SDA that provides the capability to manage, define and
control the solution.
There are a number of issues in management that are more critical in
real-time, converged networks than in traditional data networks. This is not
to say they are not as important in data networks; they have just been
overlooked due to complexity.
Management interfaces should be highly secure. Management systems
need to be able to proactively monitor network bandwidth and QoE goals
to assure that service levels can be met. Management systems should be
able to communicate to probes in devices that will report back statistics on
QoE.
Real-time and historical performance monitoring of IP telephony is
needed so that it can be managed and effectively supported in the
Enterprise environment. A Network Management System (NMS) provides a fast and
efficient way to manage and troubleshoot networks. A powerful application
provides a comprehensive set of network visualization, discovery, fault,
and diagnostic capabilities for identifying problems before they impact
network services.
Multicast real-time performance monitoring enables Enterprises to
reliably utilize webcasts, streaming video, and collaborative applications.
Note: Proactive Voice Quality Monitoring tools (PVQM) from Nortel
can provide extremely valuable metrics to enable predictive analysis of
potential degradation in voice quality, as well as allowing the quality of
voice to be used as a key parameter in an ongoing SLA. Following the
implementation of the converged network, the use of PVQM tools
would allow monitoring of voice quality as the traffic patterns change.
Also needed is centralized network access management that enables
network managers to protect the perimeter of the network from attacks that
could impact performance or availability.
Enterprise-wide security policy management helps increase a company's
defense posture by centralizing updates and enforcing compliance. This is
a system-level software application designed to manage the traffic
prioritization and network access security parameters for business
applications in the Enterprise-networking environment. Network managers
can take a proactive approach to bandwidth management, security, and
prioritization of business-critical traffic flows across the Enterprise.
Centralized QoS provisioning enables network managers to scale a QoS-
enabled network to support new applications and services. Common Open
Policy Service (COPS) specifies a simple client/server model for
supporting policy control over Quality of Service (QoS) signaling
protocols (for example, RSVP). Policies are stored on servers, also known
as Policy Decision Points (PDP), and are enforced on clients, also known
as Policy Enforcement Points (PEP).
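The PDP/PEP relationship described above can be sketched as a toy admission-control exchange. This is a schematic illustration only: the class names, the bandwidth-based policy, and the INSTALL/REMOVE strings are simplifications of the decision model, not the COPS wire protocol defined in RFC 2748.

```python
# Schematic sketch of the COPS policy model: a PEP (e.g., a router) asks a
# central PDP whether an RSVP-style reservation should be admitted.

class PolicyDecisionPoint:
    """Central server holding the policy rules (illustrative)."""
    def __init__(self, max_voice_kbps):
        self.max_voice_kbps = max_voice_kbps
        self.allocated_kbps = 0

    def request_decision(self, flow_kbps):
        # Admit the flow only if it fits within the voice budget.
        if self.allocated_kbps + flow_kbps <= self.max_voice_kbps:
            self.allocated_kbps += flow_kbps
            return "INSTALL"   # install the reservation
        return "REMOVE"        # reject it

class PolicyEnforcementPoint:
    """Client (network device) that outsources decisions to the PDP."""
    def __init__(self, pdp):
        self.pdp = pdp

    def admit_call(self, flow_kbps):
        return self.pdp.request_decision(flow_kbps) == "INSTALL"

pdp = PolicyDecisionPoint(max_voice_kbps=256)
pep = PolicyEnforcementPoint(pdp)
print(pep.admit_call(100))  # True: 100 kbps fits
print(pep.admit_call(100))  # True: 200 kbps total
print(pep.admit_call(100))  # False: would exceed 256 kbps
```

The key design point mirrors COPS itself: the enforcement device holds no policy of its own, so rules can be changed centrally on the PDP.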
Closets and Aggregation Points
If the Data Center also has to serve wiring closets in areas within the same
building, SMLT is used to connect each wiring closet to both Passport
8600s.
Wireless LAN access can also be added with the use of a WLAN 2220
wireless access point and the security is managed by the WLAN security
switch (WSS) 2250. An adaptive Wireless LAN solution is also available
with Access Ports (WLAN 2XXX) and WSS 2270 providing security and
end-user roaming capabilities.
Several underlying protocols and services are used to maximize the
manageability and performance of this solution. DHCP is used to provide
an IP address, network mask, and default gateway to each device.
Users are used to having phone service maintained during power outages
for emergency calls, including 911. This requires the consideration of
Power over Ethernet (POE) (802.3af), which is a new standard to provide
power to hard VoIP clients. A number of issues have to be
addressed before picking a strategy; certain business and regulatory
aspects need to be considered, including 911 services, redundancy, heat
dissipation, and power requirements.
Before POE, most VoIP phones got power from a power brick that was
plugged into a standard power outlet. In the case of a power failure, the
phone would be inoperable. If POE is implemented, it is important to make
sure that the power source for POE will continue to operate in the case of a
power outage.
When sizing POE requirements, a number of additional issues need to be
taken into consideration, including redundancy, survivability, power draw,
heat dissipation, and air conditioning. Usually POE is implemented in the
wiring closet and the last three items are overlooked, which creates other
problems and additional cost.
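As a rough illustration of that sizing exercise, the sketch below totals closet power and the resulting heat load. The per-phone draw and headroom factor are illustrative assumptions, not figures for any particular handset; 3.412 is the standard watts-to-BTU/hr conversion.

```python
# Rough POE closet sizing sketch. 802.3af allows up to 15.4 W per port at
# the power sourcing equipment; the 7 W per-phone draw below is only an
# illustrative assumption.

WATTS_TO_BTU_PER_HOUR = 3.412  # standard conversion factor

def poe_budget(phones, watts_per_phone=7.0, headroom=1.2):
    """Return (required watts incl. headroom, heat load in BTU/hr)."""
    draw_w = phones * watts_per_phone * headroom
    return draw_w, draw_w * WATTS_TO_BTU_PER_HOUR

watts, btu = poe_budget(phones=48)
print(f"48 phones: {watts:.0f} W of supply, ~{btu:.0f} BTU/hr of heat")
```

The heat figure is what feeds the air-conditioning question the text warns is usually overlooked.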
IP clients can be assigned IP addresses in basically three different ways:
statically, via partial DHCP, or via full DHCP. A static IP strategy is the
most secure, but also the most costly and cumbersome to implement. A full
DHCP strategy is the least costly and easiest to use from the user
perspective, but it does introduce some security risk. VoIP and multimedia
clients can either use an existing data DHCP server or a separate DHCP
server provisioned for VoIP and multimedia applications. It is preferable
to provision a separate DHCP server for security and performance reasons.
Furthermore, the VoIP and multimedia DHCP server should be placed in a
Secure Voice zone with other VoIP and multimedia components such as
Call Servers and application servers to limit access.
The voice solution
The second part of this chapter describes the voice solution of this private
network example.
Voice and Multimedia Communication Services
Voice and multimedia communications services in the IP World are
delivered over a number of different components: Call Servers, Gateways,
Clients and Application Servers. Each of these components provides a
combination of services found in the legacy voice world and required in the
new IP World referenced as Voice over IP (VoIP) or IP Telephony.
While VoIP and IP Telephony are used interchangeably in the general
marketplace, VoIP basically refers to any system that can carry Voice over
IP, where dial tone is the minimum requirement and additional services are
optional. IP Telephony is an attempt to differentiate between basic dial
tone services and the level of service customers have been accustomed to
from traditional TDM-based Voice Switches, commonly referred to as
PBXs.
Many new vendors entering the market have propagated the notion that
PBXs were broken and needed to be replaced by VoIP. They have implied
that customers cannot move to new innovative services without starting
from scratch. In general, the marketplace now realizes that the traditional
PBX can be migrated into the new world, providing both the services that
customers have grown accustomed to and overlaying new innovative services
without any service disruption to the existing users. In fact, all that has
happened is a new transport medium (IP) has been introduced to the world
of Telephony.
At the basic level, VoIP comprises two changes in the architecture of a
voice system: first, using an IP network as a connection medium to clients
and, second, using the IP network as a backplane to connect the different
component applications of the voice system. VoIP greatly simplifies the
task of Computer Telephony Integration (CTI) by converging both voice
and data at the IP layer.
The introduction of the Session Initiation Protocol (SIP) allows true
integration of new multimedia services with traditional telephony services
and applications such as Unified Messaging, Conferencing, and Call Center.
New multimedia services include Instant Messaging, desktop video
conferencing, collaboration, follow-me services, and personal control.
Figure 23-8: SIP Solutions
The location of the Call Servers, Application Servers and DHCP servers
can be anywhere in the network as long as they are reachable from a
connectivity perspective. From a performance and security perspective,
their location and proximity are very important. Generally speaking, it is
best to assure for security reasons that Call Servers, Application Servers
and DHCP servers are located in a Secure Voice Zone. Additionally, they
should be within a minimal network delay (150 ms end-to-end) and the
network segments providing signaling support should be as redundant and
resilient as possible to assure dial tone.
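That delay budget can be checked with a simple sum of one-way delay components. The component values below are illustrative assumptions; the 150 ms target follows the end-to-end guideline cited in the text.

```python
# One-way VoIP delay budget sketch against the ~150 ms end-to-end target
# referenced in the text. Component values are illustrative, not measured.

BUDGET_MS = 150

delay_components_ms = {
    "codec + packetization (G.711, 20 ms frames)": 21,
    "jitter buffer": 40,
    "LAN switching": 1,
    "WAN propagation + queuing": 60,
}

total = sum(delay_components_ms.values())
print(f"total one-way delay: {total} ms "
      f"({'within' if total <= BUDGET_MS else 'exceeds'} {BUDGET_MS} ms budget)")
```

A budget like this makes explicit how much of the 150 ms the network itself may consume once codec and jitter-buffer delay are subtracted.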
Many times, Enterprise networks will be confronted with constrained
network resources due to limited budgets or available technologies. From a
business perspective, certain customer applications may be considered a
higher priority than voice. However, prioritizing them over VoIP may have
a serious impact on the delivery of voice services and consequently on
business operations. Quality of Service mechanisms need to be employed
both at Layer 2 and at Layer 3 to ensure that voice packets do not suffer
from delay or jitter performance degradation under network load. Likewise,
Ethernet hubs should never be used to terminate IP clients and Call Servers,
as their shared collision domains make achieving any reasonable QoE
virtually impossible.
VoIP architecture and requirements (3)
The example customer's network comprises a number of different
sites, applications, and requirements. The network has the following site
requirements:
Two existing major sites (New York and San Francisco) that have a
traditional PBX with 12,000 existing TDM users, of which 2,000
users need to be mobile.
A new campus in Los Angeles for 10,000 new users with
geographical redundancy is required.
The new campus requirement in Los Angeles is the new
corporate headquarters and the customer is looking at
maintaining the rich telephony set of features they currently
have in their voice network but want to move to IP Telephony.
They are looking for a fully distributed solution that has
geographical redundancy, which is defined as the ability to
distribute redundant call servers in different locations to provide
site redundancy.
This site will be required to support 10,000 total users. The
customer has determined that 2,000 of the users will receive
sufficient service from digital sets that were freed up from the
existing PBX site.
Support a new regional office in Chicago with 800 users.
The new regional office in Chicago has a requirement for 800
users and the customer wants to have the rich set of telephony
features they are already accustomed to, but implemented on a
pure IP Telephony platform. They want network transparency
and a common dialing plan for their VoIP network.
Support a growing branch network of twenty to thirty users per
branch.
The customer's new branch office requirement is for much
smaller sites and a different division of the company. They are
looking for a centralized call processing approach
with the same feature set as the rest of the division due to the
mobility of the users.
3. Please refer to the Disclosure Notice in the Section VI Introduction (page 498).
The customer's existing branch offices are supported by
traditional key systems. The customer wants to upgrade these
sites to be integrated into the overall network and upgrade them
to VoIP-capable systems. The customer wants to consolidate all
voice and data requirements in the branch into a single
platform, including data and voice services such as
routing, VPN, VoIP, TDM set support, Call Center, Unified
Messaging, and web support.
In this case, we will assume the customer recently acquired a
company that had an installed Norstar* key system that can
easily be upgraded to BCM, maintaining the majority of the
investment in technology, training, and support.
Integrate a newly acquired company that has key systems.
Support a distributed call center application.
The distributed IP Call Center site reflects a customer requirement
for Call Center capabilities to be distributed across multiple time
zones and corresponding locations as a single seamless
operation.
The customer is looking for a distributed IP Call Center to
support both internal and external customer needs. The Call
Center needs to include the following capabilities and key
functionality:
Powerful, Skill-Based Routing. Skill-based routing means that
callers can be intelligently routed, based on their needs, to
the agent best suited to fulfill them.
Seamless Networking Environment. Networking provides an
efficient, streamlined solution for centrally managing multiple
call centers.
Adaptable Call Handling. Rich, flexible scripting language
allows your business to customize call routing decisions and
treatment based on your business processes.
Graphical, Real-Time Displays. Real-time displays provide a
snapshot of the call center for management to view customized
performance statistics for increased responsiveness to changing
conditions.
Complete, Customizable Reports and Call Tracking. Have
standard reports and the ability to customize historical reports.
Provide unified messaging to all users.
The customer is looking for a unified messaging platform that
will integrate voice mail, e-mail, IVR and fax on a single
platform and that can be closely integrated with other platforms
such as Lotus Notes* and Outlook*.
The intent is to provide the following key values:
User productivity for mobile and stationary desktop users, with
better message organization and prioritization.
Integrated architecture: no impact to e-mail systems or the
data network; easier to deploy, better reliability.
Hands-free access to messages with speech-activated
messaging.
Application builder to create caller services, including auto
attendant menus and fax on demand.
No toll costs for message networking using the VPIM
networking standard. Ability to send voice or fax messages
from one unified messaging system to another over an IP
network such as a corporate intranet or the Internet.
Provide multimedia services to all users.
In this example, it is assumed that all applications provide a high level of
QoE and that resources and bandwidth are available to support these
requirements. Figure 23-9 illustrates the logical view of the example
network.
Figure 23-9: Network Overview
Call Server
The Call server provides the basic telephony services traditionally found in
the PBX along with new services required to deal with an IP infrastructure.
Depending on a customer's requirements and network infrastructure, call
servers may be deployed in either a centralized or a distributed architecture.
Each has its advantages and disadvantages; neither architecture can be
considered best without first understanding customer uptime requirements,
network infrastructure, and carrier capabilities if a WAN is required.
In a centralized call server approach, uptime is heavily dependent on the
capability of the Wide Area Network (WAN) to be responsive and resilient.
Without proper provisioning, network outages can potentially even affect
local services including intrasite calling and 911 capabilities.
The real key is to have a common dialing plan and feature transparency
across the network. This can be achieved with a distributed architecture
that takes the burden off the WAN for all capabilities but media path
bandwidth.
When implementing call servers for either VoIP or multimedia, they should
generally be put in what is referred to as a Secure Voice Zone (SVZ). This
is simply a subnet that requires special VPN and/or Firewall access to
protect it from security attacks and isolate it from any unrelated traffic.
The concept of an SVZ can be applied to multimedia call servers, which
may in the future be referred to as a Secure Multimedia Zone (SMZ).
Figure 23-10: Secure Voice Zone
No single call server fits all circumstances. In preparing a potential solution
to the problems posed in our example network, we will look at the options
available from the Nortel product line.
Nortel Call Server Options
Nortel offers four Enterprise call servers, depending on the size and
requirements of the system: Succession* 1000, BCM, MCS5100, and
CS2100. Nortel also offers two carrier-grade call servers: MCS5200 and
CS2000. This chapter discusses only the Enterprise call servers: BCM,
Succession 1000, and MCS5100.
The Nortel solution set allows both a centralized and distributed approach.
The Succession 1000 is a family of Call Servers designed to support
medium to large Enterprise customers. It supports up to 16,000 users on a
single system, and systems can be networked together to look like a single
system to the user, providing feature transparency across the network. The
Succession 1000 provides a rich set of telephony services based on the
Meridian 1* product and is closely integrated with CallPilot* (Unified
Messaging) and Symposium* (Call Center).
The Succession 1000 family of Call Servers is comprised of the Succession
1000M, Succession 1000E, and the Succession 1000S.
The following table lists the Nortel Call Server options.
Site: Existing Major Sites, San Francisco and New York
Call Server: CS 1000M
# Users: 100 to 15,000 users
Features: Full Meridian* Feature set; PRI & H.323 trunking; SIP Signaling; full multimedia capabilities with MCS5100 integration; centralized Call Server and Gateways; distributed remote Gateways; redundant
Positioning: Predominantly TDM; existing installed customers; protect investment

Site: New Campus Requirement, Los Angeles
Call Server: CS 1000E
# Users: Up to 15,000 users
Features: Full Meridian Feature set; PRI & H.323 trunking; SIP Signaling; full multimedia capabilities with MCS5100 integration; distributed Call Server and Gateways; redundant Call Servers, Gatekeepers, Gateway and Client Proxies; geographical redundancy
Positioning: Predominantly IP; new large sites

Site: Regional Office, Chicago
Call Server: CS 1000S
# Users: 100 to 1,000 users
Features: Full Meridian Feature set; PRI & H.323 trunking; SIP Signaling; full multimedia capabilities with MCS5100 integration; distributed Call Server and Gateways; redundant Gatekeepers, Gateway and Client Proxies
Positioning: Predominantly IP; small to medium sites
Table 23-1: Nortel call servers
Table 23-1 (continued):

Site: New Branch Network
Call Server: SRG
# Users: 5 to 100 users
Features: Full Meridian Feature set when connected to central call server; provides PSTN trunking in network; provides a local branch solution for small locations
Positioning: Survivable; small sites with central control

Site: Existing Branch Network
Call Server: BCM
# Users: 10 to 150 users
Features: Norstar Feature set; integrated routing and VPN; integrated unified messaging; integrated Call Center
Positioning: Small to medium sites that want all voice and data capabilities in a single platform

Site: Distributed Call Center, Atlanta
Call Server: CS 1000B
# Users: 40 to 400 users
Features: Full Meridian Feature set; IP, digital, or analog trunks; SIP/H.323 Gateway Services; E911 Services; local or distributed media services (conference, tones, RAN/music)
Positioning: Local or distributed applications (i.e., Unified Messaging, Call Center Agents, etc.)

Existing major sites
At the existing major sites, the challenge is to upgrade the existing TDM
switch to be able to convert 2,000 of their users to IP Telephony and have
an integrated solution that will allow the existing 10,000 other users to
communicate seamlessly with the new 2,000 IP Telephony users and the
rest of the network.
There are several major areas of concern when upgrading an existing major
site to IP Telephony: maintaining the feature set the users are
accustomed to, providing the users with the same level of voice quality,
providing proper security for the IP Telephony infrastructure, and making
sure that operational issues are taken into account.
It is imperative that you provide the users not just like features but the
same features; otherwise, you will disrupt work flow and potentially create
a huge training requirement to get the users and operations people used to
a new feature set. Obviously, new and upcoming features provided by the
multimedia wave will require training and operational preparation.
Voice users are used to high quality calls that are virtually nonblocking.
When moving to a data infrastructure, care must be taken to size the
network properly for bandwidth and call loads. In a traditional voice
system, only PSTN trunks have to be sized for traffic. In an IP Telephony
system, the entire network has to be sized for calling patterns and available
bandwidth. It is recommended that only switches are used, and no hubs.
Additionally, a customer new to VoIP may want to implement a separate IP
network just for voice in the beginning. One strategy is to build a parallel
voice network and use the voice and data networks as backup to each other.
VoIP can run on any IP network; it is a question of design,
troubleshooting, and quality. Integration of real-time and nonreal-time
protocols can be a challenge. It is also recommended that QoS tagging at
either Layer 2 (802.1p) or Layer 3 (DiffServ) be applied so the network can
give priority to these flows.
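As a minimal sketch of Layer 3 tagging, the following marks a UDP socket with the EF code point (DSCP 46, which is 0xB8 in the IP TOS byte). This assumes a platform, such as Linux, that exposes the IP_TOS socket option; handset and gateway firmware would normally do this, not application code.

```python
# Mark outbound RTP-style UDP traffic with the EF DSCP so routers'
# priority queues can identify voice packets (Linux-oriented sketch).

import socket

EF_DSCP = 46            # Expedited Forwarding code point
TOS_EF = EF_DSCP << 2   # DSCP occupies the top 6 bits of the TOS byte -> 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
tos_value = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(hex(tos_value))   # 0xb8 on Linux
sock.close()
```

Marking at the sender only helps if the switches and routers along the path are configured to honor the code point, as described in the text.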
In order to design a VoIP network properly, the data network has to be
thoroughly analyzed for available bandwidth. Any potential bottlenecks in
bandwidth should be identified in both the LAN and the WAN. Where
bottlenecks are identified, a corresponding analysis or projection of
real-time (VoIP) flows should be conducted; that is, inter- and
intradepartment traffic flows for intersite, intrasite, and external
communications. This is done much like a traditional voice trunking
exercise, but applied to all constrained facilities, with the trunk
requirements then converted into bandwidth requirements. A Network
Assessment can be utilized not only to determine bandwidth bottlenecks,
but also to identify VoIP-killing impairments, such as duplex mismatch,
that are commonly found in networks.
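The trunk-to-bandwidth conversion can be sketched as follows, assuming G.711 with 20 ms packetization over Ethernet: 160 bytes of payload plus 40 bytes of RTP/UDP/IP headers and 18 bytes of Ethernet framing per packet, 50 packets per second. Other codecs and framings change the numbers.

```python
# Convert a voice-trunking result (concurrent calls on a constrained
# facility) into link bandwidth, per the exercise described in the text.

def voip_bandwidth_kbps(concurrent_calls,
                        payload_bytes=160,       # G.711, 20 ms frames
                        header_bytes=40 + 18,    # RTP/UDP/IP + Ethernet
                        packets_per_sec=50):
    per_call_bps = (payload_bytes + header_bytes) * 8 * packets_per_sec
    return concurrent_calls * per_call_bps / 1000

# e.g., a constrained WAN link that must carry 24 simultaneous calls:
print(f"{voip_bandwidth_kbps(24):.1f} kbps")  # 24 calls x 87.2 kbps each
```

The 87.2 kbps per-call figure is why header overhead, not the 64 kbps codec rate, dominates LAN sizing for G.711.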
Customers implementing VoIP want to be sure to take security into
account. While security is not discussed heavily in this book due to its
complexity, it should be noted that all VoIP call servers and gateways are
positioned in a Secure Voice Zone.
Call Servers at Existing Sites
Figure 23-11: Call servers at major sites
If this were a Nortel customer with a Meridian 1, an upgrade to CS 1000M
would be recommended for both existing major sites: New York and San
Francisco. To provide the 10,000 users with advanced services, Nortel will
implement converged desktop services with an MCS5100 overlay. If this
were another vendor's PBX, the MCS5100 could still provide a converged
desktop overlay and support the 2,000 users who want VoIP services
directly on the MCS5100.
Network recommendation for Major Sites
In our example, we assume that all Ethernet hubs in the network have been
replaced with Ethernet switches so that we don't have collisions and the
delay variation associated with collisions. Ethernet hubs should not be
deployed anywhere in a real-time network. QoS tagging has been
implemented to mark VoIP traffic with an EF DSCP code point. This
allows us to use policy filters on our routers to give voice the required
handling to ensure low delay and low jitter. We use Priority Queues for all
EF marked traffic.
VoIP has been put on its own VLAN for security and to achieve best VoIP
quality. A Network Health Check has been carried out to ensure that there
are no VoIP killing impairments in the network such as duplex mismatch.
Proper network documentation has been created that identifies the physical
and logical network structure, the QoS strategy to include QoS mappings,
proper call flow analysis (trunking), etc.
Bandwidth studies have been carried out to ensure that all links have
sufficient capacity. Both voice and data are permitted on the same links, but
we have chosen to ensure that the link is sized so that voice traffic is less
than 30% of the total link capacity. This both ensures that other traffic does
not get starved by voice traffic and ensures that, in the event of a network
failure, we can fail over to the new link without having to drop traffic.
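The 30% sizing rule can be expressed as a simple check. The link and voice figures below are illustrative.

```python
# Link-sizing rule from the text: keep voice below 30% of link capacity so
# data traffic is not starved and a failover link can absorb the load.

def voice_fits(link_mbps, voice_kbps, max_fraction=0.30):
    return voice_kbps <= link_mbps * 1000 * max_fraction

# 2,092.8 kbps of voice (24 G.711 calls) on a 10 Mbps link:
print(voice_fits(10, 2092.8))   # True: about 2.1 Mbps vs. a 3 Mbps ceiling
print(voice_fits(10, 3500.0))   # False: exceeds the 30% ceiling
```

Sizing to the ceiling on each of a redundant pair is what allows one link to carry both loads after a failure without dropping traffic.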
New campus requirement
The major advantage to a new site is that the data infrastructure and
security can be designed to accommodate the requirements of a real-time
network. This includes the exclusive use of switches that are QoS and
security capable. Additionally, the use of gigabit uplinks and gigabit
backbones should be seriously considered. Even though it is a new
network, a Network Assessment should be completed to assure there are no
VoIP killing impairments in the network.
As mentioned for the major sites, all VoIP and multimedia flows should be
tagged and all VoIP infrastructure components should be in a Secure Voice
Zone. Since the Campus is the site chosen to host both the unified
messaging and distributed IP call center, special care should be taken to
assure that enough bandwidth is built into the network, with load sharing
and QoS properly implemented, to assure a high level of
service.
Call Server at New Campus
Figure 23-12: Call server at new campus
Assuming, as before, that this is an existing Nortel customer, we would
recommend the CS 1000E for the new campus requirement. This new
headquarters site will be host to a centralized distributed IP Call Center and
Unified Messaging system. The Call Center will be discussed in both the
application section under Call Center and the Gateway section
Distributed Call Center Site. The new headquarters site will be the host for
the new small branch network. Primary call control will be from the new
corporate site. The branch gateways and functionality will be discussed in
the gateway section.
Network recommendation for New Campus
The new campus was engineered to the same specifications as articulated
in major sites. The carrier network for WAN access has both ATM and
MPLS implemented. A Service Level Agreement (SLA) has been specified
guaranteeing a high priority level of QoS, and the corresponding QoS
Mappings have been defined.
Regional office
The regional office, similar to the new campus, has the advantage of
building a new network that can accommodate the requirements and
bandwidth of a real-time network. However, special consideration has to be
taken: it will be a smaller site than the campus and, therefore, the tendency
is to take shortcuts and cost-cutting measures when building a network for
a location like this.
While this may be acceptable for a pure data network, a real-time voice
network always has to be built to the highest specifications if quality and
connectivity are the goal. This is a design goal that is built into voice
networks and never questioned. In general, a voice network has been built
to meet specific bandwidth requirements to carry a specific load of traffic.
Data networks traditionally were built on a constrained budget, and the
assumption, due to the difference between LAN and WAN technologies, was
that users would never have enough WAN bandwidth; therefore, the key
consideration was connectivity and as much bandwidth as you could
afford. This was acceptable because data networks can usually
afford to slow down and have the ability to recover. Voice networks have to
have their designated bandwidth or they have no connectivity. Furthermore,
voice networks do not have the ability to retransmit; they use UDP, not
TCP.
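The "specific load of traffic" a voice network is engineered for is classically expressed in Erlangs and sized with the Erlang B formula. A minimal sketch of that calculation (Python is used here purely for illustration; the recurrence is the standard traffic-engineering one, not a procedure from this book):

```python
def erlang_b(traffic_erlangs, trunks):
    """Blocking probability for `traffic_erlangs` of offered load on
    `trunks` circuits, via the standard Erlang B recurrence."""
    b = 1.0
    for m in range(1, trunks + 1):
        b = (traffic_erlangs * b) / (m + traffic_erlangs * b)
    return b

def trunks_needed(traffic_erlangs, grade_of_service):
    """Smallest trunk count whose blocking meets the target GoS."""
    n = 1
    while erlang_b(traffic_erlangs, n) > grade_of_service:
        n += 1
    return n

# 10 Erlangs of offered voice traffic at a 1% grade of service:
trunks = trunks_needed(10.0, 0.01)
```

For 10 Erlangs at a 1% grade of service this comes out to roughly 18 trunks: the kind of fixed, pre-committed capacity the text contrasts with best-effort data engineering.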
Therefore, the challenge is to assure that the same care that is given to large
sites is also given to all sites if a network wide IP Telephony system is to be
implemented.
Call Server at Regional Office
Figure 23-13: Call server at regional office
If this were a Nortel solution, the CS 1000S would be recommended. The
CS 1000S was the first pure IP PBX solution offered by Nortel that
featured the Meridian feature set.
(Figure callout: Communication Server 1000S, 100 to 1000 users, IP predominant)
574 Section VI: Examples
Essentials of Real-Time Networking Copyright 2004 Nortel Networks
Network recommendation for Regional Office
The regional office was engineered to the same specifications as articulated
in major sites and campus. Real-time applications deployed at the regional
office require the same level of support as major sites and the same level of
engineering. This will assure the same quality objectives are met.
Distributed IP call center site
Distributed call centers also have to be engineered properly. Most trunking
requirements for a normal IP Telephony system average about
20 to 25% trunking. In the case of a call center, trunking requirements run
anywhere from 125 to 200%. This is due to the fact that a call center agent is
supposed to be on the phone basically 100% of the time, and call centers
generally expect to keep incoming calls in queues waiting for an operator
and to offer a tree of options to pick from.
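Those trunking ratios translate directly into trunk counts. A hedged sketch (the 20 to 25% and 125 to 200% ratios come from the text above; the 200-user site sizes are hypothetical):

```python
import math

def trunks_for(users, trunking_ratio):
    """Trunks required for a site, given a user count and a trunking
    ratio expressed as trunks per user. Rounds up to whole trunks."""
    return math.ceil(users * trunking_ratio)

office_users = 200   # hypothetical general-office site
agents = 200         # hypothetical call-center site of equal size

office_trunks = trunks_for(office_users, 0.25)  # upper end of 20-25%
cc_trunks = trunks_for(agents, 2.00)            # upper end of 125-200%
```

At the same headcount, the call center needs several times the trunking (here 400 trunks versus 50), which is why the text singles call centers out for special bandwidth engineering.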
Call centers are many times the life blood of a company and, therefore,
need to be assured that if they go to IP Telephony, they indeed will have the
quality and availability previously experienced in TDM systems.
IP Call Center
The CS 1000B is recommended for the distributed Symposium IP Call
Center sites. The CS 1000B is a uniquely configured branch solution that
allows Symposium call center to be distributed to remote sites. The IP sets
on the CS 1000B are redirected to appear on the central site system; in the
case of this network, the CS 1000E.
Network recommendation for Distributed IP Call Center
The Distributed IP Call Center was engineered to the same specification as
articulated in major sites and campus. The IP Call Center is customer
facing and mission critical. Therefore, special effort has been made to
ensure that proper engineering is conducted and achievable QoS objectives
are met.
New branch offices
Branch offices in most cases use frame relay services due to their low cost.
When ATM and frame relay were originally released, they were both called
Fast Packet technology; this was based on the de facto standard that at
the time was X.25 (very slow). ATM was developed for all services
including real-time services, while frame relay was specifically for data.
Figure 23-14: Branch network
While frame relay can carry VoIP, doing so effectively is a big challenge:
there is no real QoS in frame relay, only congestion notification. Therefore,
special engineering and care need to be taken if you plan on implementing
VoIP over a frame relay network.
The new branch network is shown in Figure 23-14. As can be seen, there
are a number of different speeds depending on the site location. When
dealing with frame relay, you need to be concerned with a number of issues,
including the speed of the service, the access rate, segmentation, shaping,
policing, pacing, and PVC allocation.
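Shaping and policing interact through the committed information rate (CIR), committed burst (Bc), and measurement interval (Tc), where Tc = Bc / CIR. A sketch of one common rule of thumb for a voice PVC, shape to CIR with a short Tc and no excess burst; this is a conventional guideline, not a recommendation taken from this book:

```python
def shaping_params(cir_bps, tc_seconds=0.010):
    """Committed burst (Bc) and excess burst (Be), in bits, for shaping a
    voice PVC. Rule of thumb: a short 10 ms Tc keeps voice frames from
    waiting out a long measurement interval, and Be = 0 means the shaper
    never sends above CIR, so the carrier's policer has no excess frames
    to mark Discard Eligible (DE) or drop."""
    bc_bits = int(cir_bps * tc_seconds)
    be_bits = 0
    return bc_bits, be_bits

bc, be = shaping_params(256_000)  # hypothetical 256 kbps voice PVC
```

Shaping below the carrier's policing contract is exactly the "proper shaping and pacing" the text asks for: offending traffic is delayed at the edge instead of being marked DE or discarded inside the network.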
Branch Offices
In this case, the Nortel solution would be the Survivable Remote Gateway
(SRG). It is recommended that the frame relay network be built as a full
mesh with separate PVCs for VoIP to assure the best possible QoE. It is
further recommended that all remote sites' frame relay channels be put on a
full T-1 and not a fractional T-1. Adhere to proper shaping and pacing to
keep the ingress side of the frame relay network from applying policing
actions that may make any offending packets either Discard Eligible (DE)
or actually discard them.
Network recommendation for New Branches
The new branches were engineered to the same specification as articulated
in major sites and campus. Frame relay has been implemented in the
branches with the following design:
Segmentation has been implemented to assure that data packets do
not introduce excessive jitter to voice packets.
Shaping has been properly configured to ensure that the network
will not inadvertently throw away legitimate traffic or make it
discard eligible. This will prevent unnecessary policing at the carrier
ingress point of the network.
Proper pacing has been implemented to assure orderly traffic flows.
Voice and data have been provisioned to have their own separate
DLCIs. If separate DLCIs are provisioned for voice, Segmentation
and Pacing are not as critical.
A full mesh of DLCIs has been built to reflect the voice traffic
flows. This will reduce latency and other possible degradations.
A minimum of full T-1 access for all sites has been implemented to
minimize serialization delay.
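The serialization-delay point in the last item can be made concrete: a frame occupies the access line for frame size divided by line rate, which is why both full T-1 access and segmentation matter. A sketch (the 1500-byte data frame and the 10 ms fragmentation target are common rules of thumb, not figures from this book):

```python
def serialization_delay_ms(frame_bytes, link_bps):
    """Milliseconds a frame of `frame_bytes` occupies a `link_bps` link."""
    return frame_bytes * 8 * 1000 / link_bps

def fragment_size_bytes(link_bps, target_ms=10.0):
    """Largest fragment that keeps serialization delay within target."""
    return int(link_bps * target_ms / 8 / 1000)

t1_payload = 1_536_000   # T-1 payload rate: 24 x 64 kbps DS0s
fractional = 128_000     # a hypothetical fractional T-1

delay_t1 = serialization_delay_ms(1500, t1_payload)   # ~7.8 ms
delay_frac = serialization_delay_ms(1500, fractional) # ~94 ms
frag = fragment_size_bytes(fractional)                # bytes per fragment
```

A 1500-byte data frame delays a voice frame behind it by under 8 ms on a full T-1 but nearly 94 ms on a 128 kbps fractional circuit, which on its own blows a typical voice delay budget; segmentation caps that by fragmenting data frames (here to 160 bytes for a 10 ms target).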
Existing branch offices
The customer will be able to migrate to a converged IP network while
allowing the users to keep the feature set they are accustomed to without
retraining them or support personnel on a new system.
This network would also ride on a frame relay network. Please see Branch
Office (New) for a discussion on issues to deal with in a frame relay
network.
Branch Offices
In this case, the Nortel solution would be the SRG.
Network recommendation for Existing Branches
The existing branches were engineered to the same specification as
articulated in the new branches.
Gateways
Gateways basically provide some type of protocol conversion function.
Gateways in IP Telephony provide the same functionality to legacy
environments and to different connection protocols. There are proprietary
and third party gateways. Some of the basic gateway functions that may be
required to be performed are as follows:
VoIP to TDM phone
VoIP to PSTN Facilities
VoIP to Legacy Applications
VoIP to Legacy (TDM) terminals
SIP to H.323
H.323 to SIP
Currently, there are two major and competing connection protocols:
Session Initiated Protocol (SIP) developed by the Internet Engineering
Task Force (IETF) and H.323 developed by the International
Telecommunications Union (ITU). Calls can be connected from an H.323-
based system to a SIP-based system through an H.323-to-SIP gateway.
However, only a small set of basic call features is supported.
The following table lists the Nortel Gateway options.

Table 23-2: Nortel Gateways

1000B Media Gateway
Associated Call Server(s): All CS1000 Call Servers
Features: Provides PSTN access in network; full-service solution for branches from 80 up to 400 users; IP, digital, or analog trunks; SIP/H.323 gateway services; E911 services; local or distributed media services (conference, tones, RAN/music); survivable; local or distributed applications (that is, Unified Messaging, Call Center agents)
Positioning: Provides a local branch solution for mid-size sites; up to 400 users

1000T Media Gateway
Associated Call Server(s): All CS1000 Call Servers
Features: Provides digital trunk PSTN access; capacity of 20 PRI per SIP/H.323 gateway; no users allowed; network-wide resource; unlimited quantity allowed in network
Positioning: Used as a trunking resource for the CS 1000E; can be used by any CS1000 Call Server to provide trunking as a network-wide resource

1000E Media Gateway
Associated Call Server(s): CS 1000E
Features: Provides access to TDM devices; physical: one Media Gateway and Media Gateway Expander; up to 30 per CS 1000E system; capacity of approximately 200 IPE cards per CS 1000E system, which allows space for media cards
Positioning: Deployed on campus, not on the WAN; part of the CS 1000E system infrastructure, configured based on line size, set and trunk types, and applications

1000S Media Gateway
Associated Call Server(s): CS 1000S
Features: Provides access to TDM devices; digital/analog lines; analog/digital trunks; RAN/music; conference/tones; physical: one Media Gateway and Media Gateway Expander; up to 4 supported per CS 1000S
Positioning: Deployed on campus, not on the WAN; part of the CS 1000S system infrastructure, configured based on line size, set and trunk types, and applications

Survivable Remote Gateway
Associated Call Server(s): All CS1000 Call Servers
Features: Provides PSTN access in network; local call processing fallback; simplified configuration and management
Positioning: Full-service solution for small branches with 5-100 IP users; hosted from a centralized Call Server (feature transparency)

Clients

The customer has a requirement for a number of different clients and
services for all their sites. They prefer a ubiquitous service offering;
that is, one that is seamless across the network. This implies support for
many types of clients and facilities, including wired IP, soft IP, TDM,
wireless IP, and PDAs. It also includes TDM trunking and SIP and H.323
IP trunking.
Figure 23-15: Support clients and facilities
The decision between standards-based SIP or H.323 phones and
proprietary solutions that work within SIP and H.323 environments is a
very heated discussion. Some believe that standards-based clients are the
only solution; but these solutions, while reducing cost, generally lack the
rich feature set that proprietary clients provide.
IP clients are comprised of a number of devices to include: hard clients (IP
phones); soft clients (PC Clients); wireless clients (PDAs); and multimedia
clients that may support VoIP, Video, and Instant Messaging.
Deployment of these devices depends on a number of issues but
generally should be governed by a well-thought-out QoS and Security
Policy. The QoS strategy should include tagging at either Layer 2 (802.1p)
or Layer 3 (DiffServ). This not only provides QoS on the LAN, but allows
you to map these priorities to core technologies in the backbone.
Additionally, the voice or multimedia flows can be separated onto
separate physical subnets or separate VLANs.
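The Layer 2 to Layer 3 mapping described above is usually expressed as a small fixed table. A sketch using one conventional set of pairings (802.1p CoS 5 to DSCP EF for voice, and so on); the specific values are common industry practice, not mappings prescribed by this book:

```python
# Illustrative 802.1p (Layer 2 CoS) to DiffServ DSCP (Layer 3) mapping:
# EF (46) for voice bearer, AF41 (34) for video, CS3 (24) for call
# signaling, and best effort (0) for everything else.
COS_TO_DSCP = {
    5: 46,   # voice bearer  -> EF
    4: 34,   # video         -> AF41
    3: 24,   # signaling     -> CS3
    0: 0,    # best effort
}

def dscp_for(cos):
    """Map an 802.1p priority to a DSCP, defaulting to best effort."""
    return COS_TO_DSCP.get(cos, 0)
```

Keeping one authoritative table like this is what lets edge tagging be re-marked consistently into the backbone technologies the text mentions.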
Clients
There are no best options here as Nortel supports a plethora of clients to
include: analog, digital, IP, wireless, PDA and third party devices on all call
server systems. The hard clients get their software load from the call server
they log into and, therefore, can be used on different platforms with no
changes.
Applications
Applications cover both traditional applications found in telephony, such as
conferencing, unified messaging and call center, along with new
applications found in the multimedia revolution, which include instant
messaging, desktop video, collaboration, follow-me services and
customized personal control (Personal Agent).
Applications can be implemented in both a centralized and distributed
architecture. The choice between centralized and distributed applications is
a complex decision based on a number of cost, performance and scaling
issues. For instance, in the case of unified messaging, you need to evaluate
the overall requirement for individual sites and all sites together to first
determine if a single system could be used. If a single application platform
can support the overall requirement plus scale to the projected growth of
the network the next step would be to determine the network bandwidth
required to support a distributed environment, along with the cost and
performance.
If a distributed application approach is required, there are a number of
additional issues. The first is to measure the cost of duplicated services as
opposed to a centralized approach. The second issue is to determine the
complexity of networking multiple application servers over the network to
provide the same level of service and performance as a single server.
Network bandwidth will have to be evaluated even though it should not be
near the load of a centralized solution.
Sometimes the solution is easier based on the requirements, if evaluated
properly. For instance, in the case where two major sites required a total of
two servers to service the load and each site could be serviced by a single
server, the decision would be obvious from a technical perspective, which
would be to go to a distributed environment. However, note that this does
not take into account the cost of managing and maintaining the system,
which should be considered.
The basic nature of application servers in a VoIP and multimedia
environment naturally challenges the ability to deliver a high level of QoE.
Most application servers, due to requirements or architecture, break down
the end-to-end nature the Internet was built on. In the case of a Unified
Messaging system, it has to store and forward messages. In the case of
VoIP, it can potentially cause double transcoding, which in turn increases
the demand for the network to be loss-free and error-free, with minimal
delay and minimal jitter.
Both centralized and distributed approaches have their benefits, depending
on the requirements. However, a long standing computer paradigm has
been to use bandwidth instead of computing cycles whenever possible to
minimize the complexity of the system.
Unified Messaging
When designing a network for a centralized unified messaging platform,
much care should be taken with the architecture of the network
and of the unified messaging platform. Most VoIP networks today are
financially justified by implementing the G.729 codec (CELP) for bandwidth
savings. This codec is known to deliver near-toll-quality performance, but
that is based on a single transcoding.
Many store-and-forward application servers, like a unified messaging
platform, may introduce an anomaly called multiple transcoding.
This is where the message is transmitted across the network at G.729,
transcoded back to G.711 when it hits the application platform, and
then stored in another compression mode. When the message is picked
up, it may be compressed back to G.729 for transmission on playback.
Depending on the quality of the network, voice quality may degrade
substantially.
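The bandwidth stakes behind the G.729 decision can be estimated per call. A sketch assuming 20 ms packetization and the fixed 40-byte IP/UDP/RTP header, and ignoring Layer 2 overhead:

```python
def voip_bandwidth_kbps(codec_kbps, packetization_ms=20, header_bytes=40):
    """One-way IP bandwidth per call: codec payload plus the fixed
    IP/UDP/RTP header carried by every packet."""
    packets_per_second = 1000 / packetization_ms
    header_kbps = packets_per_second * header_bytes * 8 / 1000
    return codec_kbps + header_kbps

g711 = voip_bandwidth_kbps(64)  # G.711: 80 kbps of IP bandwidth
g729 = voip_bandwidth_kbps(8)   # G.729: 24 kbps of IP bandwidth
```

Even with header overhead eroding the nominal 8-to-1 codec ratio, G.729 still uses well under a third of G.711's per-call bandwidth, which is why networks accept the multiple-transcoding risk described above rather than running G.711 end to end.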
In this case, you will want to design the network to be able to either record
or play back at G.711, thereby eliminating a multiple transcoding. Another
solution is to store the message in the original algorithm or as an RTP
stream. However, this is in the domain of product enhancements and these
solutions still have a number of limitations and generally are not
implemented well at the time of this writing. The purpose of this discussion
is to shed light on some implementation issues and potential solutions to
unified messaging architectures, and not to discuss the relevant advantages
and disadvantages of unified messaging architectures.
Unified Messaging
CallPilot is a unified messaging tool that utilizes speech recognition and
TCP/IP digital networking to give complete access and total control of fax,
e-mail and voice messages. Using simple voice commands, like play or
print, a user can remotely manage their multimedia communications over
the telephone. The user can print faxes, store or delete voice messages and
more just by speaking.
Call Center
As discussed before, call centers are generally customer facing and key to a
company's success. Therefore, IP Call Centers need to be implemented
with the highest priority for these VoIP flows. Additionally, they should be
made as robust and resilient as possible assuring that there is sufficient
bandwidth along with load sharing appliances to assure maximum
performance. A Call Center is a strong candidate for a separate IP network
for the application to assure maximum quality.
It should be realized that once an IP network becomes congested, packets
are discarded and the TCP congestion-control algorithms kick in to
preserve the network for high-priority applications. However, this is very
disruptive to real-time protocols. One way to protect against this is to put
critical VoIP network requirements on their own network.
Call Center
Symposium Call Center is recommended as the call center solution.
Symposium is an industry leading solution that traditionally is considered a
centralized solution, being associated with a single Call Server. The unique
configuration of the CS 1000B allows Symposium to be implemented as a
distributed IP Call Center application.
Multimedia
It should be noted that while multimedia applications are considered to be
real-time applications similar to VoIP, there are differences, as originally
identified in the ATM AAL service-class analysis of applications. That
analysis defines the attributes as the timing relationship required between
the source and destination, whether the bit rate is constant or variable, and
whether the connection mode is connection-oriented or connectionless.
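Those three attributes can be tabulated per application, following the classic ATM AAL service-class analysis; the application list here is illustrative:

```python
# (timing relationship, bit rate, connection mode) per application,
# in the spirit of the ATM AAL service-class analysis referenced above.
SERVICE_CLASS = {
    "voip":      ("timing required",    "constant", "connection-oriented"),
    "video":     ("timing required",    "variable", "connection-oriented"),
    "ftp":       ("no timing relation", "variable", "connectionless"),
    "messaging": ("no timing relation", "variable", "connectionless"),
}

def needs_realtime_priority(app):
    """Applications with a source-to-destination timing relationship are
    the ones that need strict network priority."""
    return SERVICE_CLASS[app][0] == "timing required"
```

Classifying applications this way makes the next point explicit: only the timing-sensitive flows (VoIP above all) must win when priorities collide.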
Multimedia requirements are diverse in their bandwidth and timing
relationships; however, for the most part VoIP is the most demanding when
it comes to the timing relationship. Therefore, if voice is expected to be
near toll quality, it will require being given the highest priority in the
network over all other applications.
For instance, in a true multimedia call between users where VoIP, video,
instant messaging, application collaboration and FTP may all be happening
simultaneously, VoIP is the only service for which, if call quality is
affected, the entire session is in jeopardy. Most times, minor degradation in
the quality of the other services will be a mild distraction. Many times the
video can be turned off to gain bandwidth for other applications and turned
on only when applicable.
The MCS5100 Multimedia Call Server
Our VoIP offering is enhanced by multimedia services, which remove the
barriers of distance and location with applications including video
conferencing, instant messaging, collaborative whiteboarding, and
dynamic call handling. These applications are delivered by the MCS5100.
MCS5100 is designed as a Multimedia Call Server that can provide
multimedia services as an overlay to any customer's existing voice network,
or a total voice and multimedia solution for new customer requirements.
MCS5100 can act as a standalone system providing a basic set of telephony
features available with the SIP protocol and can, additionally, overlay
existing PBX solutions with an option called converged desktop.
Converged Desk Top
Converged user desktop extends the value of advanced multimedia services
to end-users, while retaining the investment in existing digital telephones
and PCs.
Figure 23-16: Converged user desktop
When Converged Desktop services are added to an employee's existing
telephone and PC, they can be utilized to provide advanced communication
to their desktop. Both the PC and the telephone act as if they are a single
device when providing advanced features to an end-usermaximizing
functionality while minimizing costs. Converged desktop has the following
features:
Calls coordinated with PC Client
Current Phone handles voice,
PC Client handles Multimedia
Events on one device cause status updates on the other
For example, with MCS5100, the calls are coordinated with the existing
digital phone and the Converged PC Client. The calls come in through the
trunks on the PBX and go through an SIP/PRI gateway to the MCS 5100
and are presented to the PC Client simultaneously as the phone is ringing
(see Figure 23-17). From an end-user experience standpoint, the phone
rings and provides voice services, as it did before. The PC simultaneously
provides visual/picture caller ID of who is calling, as well as additional
information.
The phone still works like it did before; you have your choice of answering
the call using your phone/speakerphone or a PC-based headset. The PC
simultaneously provides collaboration services and features such as Call
Log, Instant Messaging, Whiteboard, and file transfers, all of which provide
additional features and capabilities without losing anything or changing
the user's desktop, and are additive without any disruption or retraining for
users or maintenance people.
Figure 23-17: Converged Desktop
Collaboration Services: Instant Messaging, Conferencing and
Whiteboarding
Collaboration focuses on organizations that place great value on the need to
collaborate, such as by using video and file sharing. This is
especially helpful for far-flung, distributed workforces.
Figure 23-18: Collaboration
On a typical conference bridge, people dial in and the moderator has to
keep asking "Who just joined the bridge?" With instant messaging, they
get a message telling them who just joined. On a traditional bridge, they
frequently have to track down people who are supposed to be on the call,
placing everyone on hold while they dial four to five phone numbers to find
someone. With instant messaging, they click on the person's name, and can
send them an instant message that may appear on the user's phone, PC,
pager, or whatever the user's preference is currently set to.
The end-user should have the capability of connecting to the conference
bridge from PC soft-client, mobile phone, or office phone. The last stage of
a call usually involves discussing or sharing a document. Instead of
collecting FAX numbers and e-mail addresses, the conference chairperson
can send files, push web pages, and share a whiteboard application with
other conference participants.
A highly-mobile worker deals with coworkers in several different regions
globally. A worker should be able to communicate with others when they
are in the office using their phone and PC, or when they are at home using
their PC on a cable modem or DSL line, even from a hotel with Internet
access or a web terminal. Communications over the public network should
be protected utilizing secured, encrypted VPN technology.
If this were a Nortel solution, the MCS 5100 would provide Instant
Messaging, conference bridging, whiteboarding and follow-me voice
services.
Meet Me Conferencing
In addition to the base service's ad hoc conferencing capability, an optional
"Meet Me" Media Conferencing application is available. Meet Me Media
Conferencing should require no reservations and utilizes soft DSP
technology, which reduces the cost and footprint when compared to TDM-
based in-house conferencing. Users access the service with a dial-in
number and passcode, just like many use today with TDM-based
outsourced conferencing.
The service should support both the G.711 and G.729 codecs to
accommodate lower-throughput networks or DSL access. For Enterprises
currently outsourcing their conferencing, Meet Me Media Conferencing
can produce immediate, significant savings.
If this were a Nortel solution, the MCS 5100 provides a highly scalable
audio conferencing solution that gives the chairperson visual notification
of conference activity and, from an ROI perspective, offers significant
savings to an Enterprise that may today be providing services through a
third-party provider. Unlike most conferencing services, the chairperson
gets notification of participants entering and leaving the call, so there is
no more asking "who joined?" in the middle of the conversation. The
solution will support point-to-multipoint video conferencing by late 2004.
Personalized management
A personal agent is a web-based portal for accessing all of the advanced
features listed here. Customizable settings include all of your contact
information (Name, Phone Numbers, E-mail, Photo-ID). It includes an
address book of frequently called numbers and your "Friends" list, which
are the people you work closely with. You can see the
Presence status of your Friends from your PC Client window.
Figure 23-19: Personal agent
Presence is a key feature of doing mobility well. Presence is the concept of
a system treating you as one user with multiple devices. Instead of multiple
phone numbers, addresses, and separate services, the user has a single set
of services, coordinated across their multiple devices.
The Automatic Presence feature can be enabled if you want, so if you don't
touch the keyboard or mouse for a selectable period of time, your presence
status is changed to offline; then, all the people that have you as a friend
will see that you are offline.
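The Automatic Presence behavior amounts to a simple timeout check on user input. A sketch (the function name, the status strings, and the 10-minute default are invented for illustration):

```python
def presence_status(seconds_since_input, on_call, idle_timeout=600):
    """Derive a presence status from keyboard/mouse idle time and phone
    state, in the spirit of the Automatic Presence feature above."""
    if on_call:
        return "on the phone"
    if seconds_since_input >= idle_timeout:
        return "offline"
    return "online"
```

Friends polling (or being notified of) this derived status is what lets callers pick the best way to reach someone, as the manager example below describes.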
For example, if a manager has a little phone next to their status icon, then
you know they are on the phone and your calls will go to voicemail. If you
require a quick answer to an easy question, you can send an IM and they
can get you an answer right back. This allows you to get quick answers to
questions while you are still on the phone with the person who asked the
question, which reduces multiline phone interruptions and reduces the
number of voicemail messages that need to be returned.
MCS 5100 Call Manager allows preferences to be set on how to handle
calls. Though an administrator can set defaults and intervene if needed,
this is designed to be set up by the end user. MCS 5100 Call Manager is
very intuitive and easy to use. Different profiles can be set up, based on
whether you are in or out of the office, allowing calls to be routed
differently. For instance, calls coming from your family can be given
different treatment to reach you wherever you are, or perhaps routed
automatically straight to Voice Mail.
Utilizing the presence concept, the MCS 5100 provides you a visual
indication of the status of your close contacts (that is, Friends). Time can
be saved by looking at their status and availability, helping to decide the
best way to communicate with them at any given time.
Custom applications
Custom applications are the ability to take a multimedia system and
leverage the advantage of a standards-based solution while also
leveraging "killer apps" (applications) that may be developed by third
parties. Custom applications are about taking a feature set like MCS 5100
and customizing it for specific applications.
Figure 23-20: Custom applications
The MCS 5100 platform is not a completely closed environment; it can
host a variety of multimedia applications. Nortel has a growing community
of developer partners working on providing a wide variety of applications
to various vertical markets, as well as customized development work for
specific organizations.
Different organizations and industries have different "killer apps." MCS
5100 allows the creation of the ultimate "killer app," integrated fully with
the MCS 5100 communication platform. Applications development for the
MCS 5100 is standards-based, featuring a large number of APIs and
interfaces including SIP, CPL, HTTP, JAIN, Parlay, VXML, SOAP,
CCXML, and SALT. Upcoming product releases will feature developer tool
kits that Nortel authorized partners and customers can use to customize
their environment.
Conclusion
A major advantage of the Nortel solution is that an existing Nortel
customer does not have to implement a forklift strategy to take advantage
of new offerings and capabilities offered in the new IP space. Additionally,
new non-Nortel customers are not necessarily forced into a full VoIP
solution if it is not required. As compelling a story as VoIP and
multimedia are, it is beginning to be realized that not all customer voice
requirements include the flexibility and mobility that VoIP delivers. While
VoIP is probably the ultimate solution, it requires a substantial investment
in the data network to provide and guarantee the same quality and
bandwidth that a traditional TDM system provides. Until data networks can
be built without constrained bandwidth and all devices are QoS aware,
many customers may find it easier to implement traditional voice systems
for many of their voice applications.
When choosing VoIP, Nortel offers a number of different Call Servers
meeting different customer requirements and feature sets. Call Servers
should be located in the network to minimize delay and guarantee dial tone.
Additionally, Call Servers, Gateways and application servers should all be
placed in a Secure Voice Zone (SVZ) to protect against Denial of Service
and other security vulnerabilities that can affect their ability to deliver
service.
When implementing a VoIP network, QoS should be implemented to
assure the proper level of QoE desired. Ensure the proper level of QoS
through the use of QoS tagging at either Layer 2 (802.1p) or Layer 3
(DiffServ), implementation of VLANs (802.1Q), proper QoS mapping,
proper network design and sizing, and sufficient network security.
It should be noted that call centers require much more bandwidth than
traditional VoIP applications because call center agents are virtually on the
phone all the time, sometimes handling multiple calls with more in the
queue. Additionally, they are generally customer facing and, therefore,
have the highest demand for QoE.
Frame relay network connections require special consideration and
engineering due to their lack of QoS. To ensure a high level of QoE in the
frame relay portion of the network, a full mesh topology should be
implemented with separate PVCs for voice. Additionally, all sites should
have a minimum of T-1 access (not FT-1) to assure the best QoE.
Nortel has a full line of products that provide solutions for voice and
multimedia. They deliver industry-leading technology advancements,
provide investment protection, and allow the user to move into the new
world of IP and mobility.
Chapter 24
IP Television Example
Ed Koehler
Chris Busch
Figure 24-1: Transport path diagram
This chapter covers the issues and technologies for supporting real-time
unidirectional streaming video applications. IP-based television is
introduced and discussed as an example application for this type of service.
A high-level overview of an IP-based television head end is covered, with
particular detail on network-level behaviors. IP multicast is examined
against the service-level requirements for this application, using the typical
CATV user experience as the baseline. From this, enhancements to
multicast technologies that enable this application model to be
implemented are covered. Aspects of content protection and conditional
access for these paid-for services are also discussed.
[Figure 24-1 depicts the real-time transport path from the application perspective down to the network: audio, voice, and video codecs ride over RTP, with RTCP, RTSP, SIP, H.323, and H.248/MGCP/NCS providing session and gateway control; these run over UDP/TCP/SCTP and IP, over MPLS, ATM (AAL1/2, AAL5), Frame Relay, or Ethernet, over SONET/TDM, copper, WLAN, cellular, xDSL, and DOCSIS/HFC fiber, packet, and cable media, with QoS and resiliency applying across the stack.]
Unicast streaming video is also covered with a focus on Video on Demand
type services that may be adjunct to the IP multicast television distribution.
The reader will learn about the basic concepts of unicast streaming video
and how it differs from multicast. Both transport and media signaling
methods will be discussed in detail.
The chapter will wrap up with considerations for Video on Demand usage in
the QAM-based hybrid fiber-coax transport architectures that are typically
seen in the cable provider environment. A basic overview is provided of
QAM delivery of VoD services and of how IP-based content insertion can
replace traditional ASI approaches.
Introduction
In today's carrier, service-provider, and Enterprise networks, the use of
advanced IP-based networks has become more commonly accepted as an
alternative to the more traditional modes of media transport.
The use of this technology for IP-based television has been gradually
increasing without fanfare in the industry at large. For some time, it has
enjoyed acceptance outside of North America, with early beachhead
deployments in both Europe and the Asia-Pacific Rim. Recently, however,
it has begun to generate increased interest from Local Exchange Carriers,
Regional Bell Operating Companies, and Internet Service Providers (ISPs),
as well as Metro Area Transport providers that are beginning to implement
Ethernet-to-the-User (ETTU) networks in residential high-rise
applications.
Enterprises are also finding that properly leveraging high-speed advanced
IP networks allows them to offset much of the cost of implementing a
traditional television headend and fiber/coax distribution system.
IP-based television places very stringent requirements on the IP network.
Not only is there the requirement for IP multicast capabilities in the
network, but also the need for a robust and stable deployment, which has
only recently arrived in the market.
In order to understand the improvements that had to be made, it is
important to review IP multicast from a generic perspective. This chapter
reviews the basics of multicast and then compares that generic working
architecture against the requirements of an IP-based television head end.
By comparing a generic IP multicast deployment against the application
requirements of the reference model, areas where optimizations had to be
made can be highlighted.
IP multicast is an evolution in progress
The basic premise of IP multicast is to send out a single packet (or in the
case of video, a stream of packets) from a single source to multiple end-
points. This by itself would not seem very difficult and, from a high level, it
was not. However, achieving it in a scalable and stable manner, with the
necessary flexibility, proved to be more difficult.
Because routers do not, by default, forward multicast or broadcast
traffic, a method was needed to provide Layer 3 links across the routed
boundary. All multicast routing protocols are methods to address this
requirement.
DVMRP is a good all-around multicast technology. While its network
overhead is comparatively high (a result of the routing table update
requirement inherent in vector-based routing protocols), it scales
relatively well and is well adapted to dense mode networks. This is
particularly true when DVMRP is implemented with the right routing
policies and features. The newer PIM-SSM technology, however, raises
expectations by showing a promise of scale beyond that of any other
protocol.
There is another consideration that is equally important: IP-based
television is a single-source multicast model. The premise of the
application is to make a single stream (the television channel, which is the
equivalent of an IP multicast group) available to multiple viewers. Both
DVMRP and PIM-SSM are source-driven or reverse-path implementations
of multicast. For this reason, the source of the multicast activity is always
the root of the network tree, unlike a shared-trees approach such as PIM-
SM, which starts the build of the multicast tree from an independent root
known as a Rendezvous Point (RP). For these reasons, both DVMRP and
PIM-SSM are logical choices as Layer 3 routing protocols for IP television
headend implementations.
IGMP indications of signaling interest
Interest in a particular multicast group is indicated by an Internet Group
Management Protocol (IGMP) join request that is generated by the client.
The edge router sees the join request and joins the interface onto the
multicast group. If the router is running DVMRP, it will have a known
reverse-path route back to the site by virtue of the DVMRP table updates. If
it is PIM-SSM, it will have the unicast source address to reference for a
reverse forwarding path.
During the whole time that the client is viewing the channel, it is
generating IGMP membership reports (every 125 seconds by default)
for the Class D address. These membership reports enable the edge router
to keep the interface joined onto the multicast tree.
By default, the edge router prunes the interface after the explicit leave
message if it does not receive another IGMP membership report for the
multicast group within two group-specific query intervals (400 ms,
assuming a Last Member Query Interval [LMQI] of 2, or 200 ms, and a
robustness value of two). This improves the edge performance greatly, to
roughly one half second. The LMQI is the amount of time between the
group-specific queries, and the robustness is a factor of expected data loss
on the network (high values mean high data loss, and, therefore, more
queries will be sent).
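The leave-timer arithmetic above can be sketched in a few lines; the function name and millisecond units are illustrative, not part of any product interface:

```python
def prune_wait_ms(lmqi_ms: float, robustness: int) -> float:
    """Worst-case wait between an explicit IGMP leave and the prune:
    the router issues `robustness` group-specific queries, one LMQI
    apart, before concluding that no members remain."""
    return lmqi_ms * robustness

# The chapter's example: an LMQI of 200 ms and a robustness value of two.
assert prune_wait_ms(200, 2) == 400
```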
By this coordination of the two protocol environments, IGMP and
multicast routing (DVMRP and PIM-SSM), the state of the multicast event
is maintained. Figure 24-2 illustrates several stations on a common Layer 2
segment. One station is leaving multicast group 228.1.1.1, but there is
another client on the segment that is sourcing a membership report. This
signals to the edge router that there is still solicited interest in the channel,
and the edge router does nothing but continues to serve the stream for that
group. There is another client that is leaving 224.1.90.5. Because the router
does not see any other IGMP reports for that group after the standard
interval, it is pruned off of the segment. A new station request, such as the
one for 224.1.1.1, will require a reverse-route Shortest Path Tree (SPT)
to be set up as a join to the multicast group. This will incur some
latency for the build of the tree join. The farther away the client is
from the sending source, the longer the setup latency will be.
Figure 24-2: IGMP membership reports
It is useful to think of Layer 3 activity (between multicast-enabled routers)
as tree-based topologies that branch out from the sending source to the
network edge. It is also useful to think of the Layer 2 segments as clouds
of multicast activity, meaning that without some sort of multicast control at
Layer 2, all multicast traffic will be sent to all active stations on the
segment, whether they have solicited interest or not. Thankfully, Layer 2
Ethernet switches provide the perfect place to introduce Layer 2 IGMP
controls that, in essence, extend the tree concept into the Layer 2 clouds.
Note, however, that a Layer 3 forwarding device can also be located at the
edge, where the IGMP control is accomplished along with the routing
functionality at the same interface.
The IP television system
The IP-based television delivery system provides the video sources for the
service. From the perspective of the IP network, what results is the
presentation of a series of IP multicast sources (the television channels). As
a result, the IP video headend is a set of subsystems that work together to
provide the basic service of broadcast-quality television signals over an IP
multicast-enabled infrastructure. This service is defined as basic because
companies also offer other service extensions such as e-mail, web access
and VoD.
The typical speed of MPEG2 D1 resolution is around 3 Mbps.1 It is
commonly accepted as the baseline threshold for IP-based television
media. Hence, 3 Mbps will serve as the channel speed in this discussion
and reference model.2
The IP television reference model
Figure 24-3 shows the network reference model that will be used for the
remainder of this discussion.
As illustrated in Figure 24-3, there are two subscriber access models. The
first is intended to bring 100 Mbps Ethernet directly into the residence.
This connection is then fanned out by a small residential Ethernet
workgroup switch, which provides connectivity to the IP STB, as well as
the subscriber's PC for normal Internet access. This connectivity model is
appropriate in metro high-rise applications, gated or clear boundary
residential neighborhoods, and corporate Enterprises.
1. Although some implementations (specifically those of ADSL providers) are looking to reduce this
requirement, others (like some cable applications) use more bandwidth for better image quality.
2. It should be noted that speeds as high as 6 or 8 Mbps are also used in Standard Definition Video.
Figure 24-3: The IP television reference model
In more rural areas, such a model is not yet practical. The use of DSL
technology maps well into this space if the right kinds of capabilities exist
in the provider's network. The more traditional distribution architecture for
DSL is ATM. In this scenario, there is not a lot of capability that can be
introduced into the DSLAM to optimize it for IP multicast. Consequently,
discrete VLANs over individual PVCs are run over the DSL line to the
subscriber residence. The last point of control is at the edge router itself.
While this is a workable design, it is limited in how well it scales. It also
requires DSLAM aggregation trunks greater than OC-3. While many new
ATM-based DSLAMs are coming out that meet these requirements, much
of the installed infrastructure is legacy OC-3 based, with very large
populations of subscribers attached.
A more recent approach is to actually place Ethernet or IP down the DSL
subscriber link, which has the advantage of avoiding the ATM scaling
problem.
See Appendix F for further details on the IP television head end system.3
3. Please refer to the Disclosure Notice in the Section VI Introduction (page 498).
Core switch/router feature enhancements for IP multicast
The first issue to be addressed was the channel setup time to register the
requesting client and then build the extension of the tree out to the new
viewer. With DVMRP, the underlying method of group propagation
(reverse-path table updates) is far too time consuming. The workaround
was to introduce a static route (static forwarding entries) feature to
DVMRP so that the multicast groups are brought up and active out to the
edge of the core on a full-time, 24x7 basis.
In addition, a generic static receiver allows the configuration of static
groups on given ports, including ports connecting the Layer 3 switches
together for fast channel delivery.
In the reference model, the IP television headend is sourcing eighty
channels at 3 Mbps per channel. This sets the requirement for a full-time
multicast activity across the core at 240 Mbps of IP multicast traffic.
These simple feature enhancements avoid much of the network overhead of
DVMRP. By providing all eighty channels out to the edge of the core, each
channel is available immediately because the whole build of the tree across
the core from the sending source is avoided. These features are set up
only on the edge switches, so the configuration is easily replicated and
implemented.
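As a quick check of the figures above, the full-time core load is just the channel count times the per-channel rate (a sketch, not a sizing tool):

```python
def core_multicast_load_mbps(channels: int, rate_mbps: float) -> float:
    """Steady-state multicast load across the core when every channel
    is statically forwarded to the edge (DVMRP static forwarding)."""
    return channels * rate_mbps

# Reference model: eighty channels at 3 Mbps each -> 240 Mbps.
assert core_multicast_load_mbps(80, 3) == 240
```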
With DVMRP routing policies, user interface subnets are not advertised to
the rest of the network, protecting the network from accepting multicast
traffic from these interfaces. Although these interfaces are not
advertised to the multicast network, unicast routing policies are not
impacted because DVMRP uses its own routes and route exchange mechanism.
Also, DVMRP default route policies allow for the reduction of the routing
tables to just a few routes. These enhancements make DVMRP highly
scalable.
The introduction of PIM-SSM avoids a great deal of the latency in the
tree build by eliminating the use of reverse-path table updates or some
sort of auxiliary source address discovery method (such as an SPT join
from the RP shared tree in PIM-SM). The reason for this is that the
unicast source address is already known by the PIM-SSM edge router at the
time the client requests the channel. Consequently, the edge router can
reference an independent unicast routing table and build the leaf
immediately.
With route policies, the path each group uses out to the edge can be
preengineered, further assuring stability and predictability not only during
normal operations, but also during failure rollover scenarios.
The first enhancement for PIM-SSM is a feature known as PIM-SSM
static mode. In PIM-SSM static mode, a table is maintained in a switch's
forwarding table, supporting the service, that holds (S, G) entries for
all PIM-SSM groups. This means that the sending source no longer has to
provide a source-specific advertisement, because the edge core switch
connecting to the client will have a known unicast IP source address for
the given group in its PIM-SSM static table.5
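Conceptually, the PIM-SSM static table is a (group -> source) map that the edge switch consults instead of performing source discovery. A minimal sketch, with hypothetical addresses that are not from the reference model:

```python
from typing import Optional

# Hypothetical PIM-SSM static table: group (G) -> known unicast source (S).
SSM_STATIC = {
    "232.1.1.1": "10.0.0.10",
    "232.1.1.2": "10.0.0.11",
}

def static_source(group: str) -> Optional[str]:
    """Return S for the (S, G) entry, letting the edge switch build the
    leaf from its unicast routing table with no source discovery."""
    return SSM_STATIC.get(group)

assert static_source("232.1.1.1") == "10.0.0.10"
assert static_source("232.9.9.9") is None
```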
Figure 24-4: Core IP multicast enhancements
Another aspect to be considered is that the reserved addressing space
(232/8) for SSM is configurable. Whether the operation is in static or
dynamic mode, the address range still needs to be configurable. This
provides the flexibility to use IGMPv2 and IGMPv3 with any defined range
of multicast addresses outside the predefined SSM range. As a result, a
user can take an already addressed IP multicast environment (225/8, for
example) with IGMPv2 and retrofit it into a single-source mode
deployment. This allows the user to reap the benefits of single-source
technology without the need to upgrade and redesign.
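The configurable-range check described above amounts to a prefix-membership test. A sketch using Python's standard `ipaddress` module; the retrofit range is the chapter's 225/8 example:

```python
import ipaddress

def in_ssm_range(group: str, ssm_range: str = "232.0.0.0/8") -> bool:
    """True if the group falls inside the (configurable) SSM range.
    The default is the reserved 232/8; a retrofit deployment could pass
    e.g. "225.0.0.0/8" instead, as described in the text."""
    return ipaddress.ip_address(group) in ipaddress.ip_network(ssm_range)

assert in_ssm_range("232.5.5.5")
assert not in_ssm_range("225.1.1.1")
assert in_ssm_range("225.1.1.1", "225.0.0.0/8")
```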
Summary of Core Switch/Router Features to efficiently support IP
Television:
5. To ease the configuration task of such tables, a centralized management system usually allows bulk
configuration for hundreds of switches with such entries in the Nortel Layer 2/3 core switch
implementations.
DVMRP static forwarding entries
DVMRP static receiver
DVMRP routing policies
PIM-SSM static mode
SSM configurable reserved addressing space (232/8)
Appendix F contains further details on the Core feature enhancements for
IP multicast.6
Edge switch feature enhancements for IP multicast
Once the stability and robustness of the core implementation are
addressed, the next element to be considered is the network edge. As
stated previously,
it is best to think of multicast activity at Layer 2 as a cloud. This means
that, by default, there is no control of multicast activity, which means that
the traffic is sent to every active port.
Because of this, some form of IGMP control is needed. These features are
IGMP Forward/Proxy, IGMP Forward/Relay, and IGMP Snooping. Together they
allow a Layer 2 switch to perform the following actions:
Identify which port is connected up to a multicast-enabled router
(port 5 in Figure 24-5).
Identify which ports are receiving IGMP reports for which groups
and thereby allow that traffic to be forwarded. All other ports are
not forwarding the traffic associated with that multicast group
because no IGMP report for that group was received from those
ports.
Forward to the edge router the IGMP reports that need to be seen
(such as the leave message of the last viewing client of a group),
or block their forwarding (as in the case where there are still
viewing clients for a given multicast group), because there is no
need to interrupt the edge router in this scenario.
6. Please refer to the Disclosure Notice in the Section VI Introduction (page 498).
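The three snooping actions above reduce to a simple egress decision per group. A sketch, with illustrative port numbers matching Figure 24-5's router on port 5:

```python
# State learned by the Layer 2 switch from snooped IGMP traffic:
mrouter_port = 5                  # port facing the multicast router
group_members = {                 # group -> ports that sent IGMP reports
    "228.1.1.1": {1, 3},
}

def egress_ports(group, all_ports):
    """Forward a group's traffic only to its member ports plus the
    router port; every other port is blocked for that group."""
    return (group_members.get(group, set()) | {mrouter_port}) & set(all_ports)

assert egress_ports("228.1.1.1", range(1, 9)) == {1, 3, 5}
```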
Figure 24-5: IGMP snooping and forward/proxy
IGMP fast leave and LMQI residual streams
With the timing parameters that have been discussed so far, there is still a
problem that lurks in the details of timing. During channel surfing, a
subscriber may traverse as many as ten channels in half a minute. During
this time, the viewer will, of course, want to see what's on.
Arrangements have been made for this to happen, as discussed with the
core feature enhancements. However, the router is still going through the
standard time
interval (roughly one second) before the actual prune takes place. As a
result, during channel surfing, a series of residual or orphaned streams are
up and active but there is no real interest on the part of the viewer. The
prune for these streams has not yet timed out. If this happens with enough
channels and enough subscribers (or if there is even a single subscriber
with a comparatively narrow bandwidth profile such as DSL), some serious
problems can occur in the way of congestion.
In order to avoid this, a feature known as fast leave has been added. This
feature allows for the immediate blocking of the Layer 2 switch port and
the immediate forwarding of the IGMP leave to the router, where it acts
immediately to prune the interface from the group. The addition of this
feature provides a leave-prune time in the millisecond range.
As perceived by the human eye, this happens in an instant. In single STB
deployments or in deployments where each STB is on a separate VLAN,
the fast leave feature works well. In instances where multiple STBs share a
common VLAN, however, there is a quirk. If there are two STBs both
viewing the same channel and one of them changes to another channel,
both STBs will lose the channel. The second STB will source an IGMP
report for the channel and the edge router will join the port back onto the
event so the loss of video will be intermittent, but it could be for up to one-
half second. This is enough to cause issues. As a result, a feature known as
adjustable LMQI was developed, which allows for multiple STBs on a
single VLAN model. This allows for the fine-tuning of the LMQI value,
and its association with the IGMPv2 leave improves the handling of this
process.
In this scenario, when the first STB sends a leave, the Layer 2 switch
removes the STB port from the group. But the edge router still keeps the
stream active. At this point, the Last Member Query Interval (LMQI) timer
is started at the edge router. During this time, the edge router listens for any
report activity for the multicast group, following the IGMPv2 leave
process. In the case of the above example, STB 2 will answer the
group-specific query by sourcing an IGMP report within the LMQI window.
When the switch sees the report, it continues to serve the stream and
STB 2 does not experience a
service interrupt. If the LMQI were to expire and no reports were received,
the stream would be deactivated. Figure 24-6 illustrates these features.
Figure 24-6: Channel Surfing
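The fast leave and LMQI behavior just described can be condensed into one decision function; the return strings and parameters are illustrative only:

```python
def handle_leave(remaining_reporters: int, fast_leave: bool,
                 lmqi_ms: int) -> str:
    """Sketch of the edge behavior after an IGMP leave: fast leave
    prunes at once; on a shared VLAN the router arms an LMQI timer and
    keeps the stream if any remaining STB reports in that window."""
    if fast_leave and remaining_reporters == 0:
        return "prune immediately"        # single-STB / per-STB VLAN case
    if remaining_reporters > 0:
        return "keep stream"              # another STB answers the query
    return f"prune after {lmqi_ms} ms"    # LMQI expires with no reports

assert handle_leave(0, True, 200) == "prune immediately"
assert handle_leave(1, False, 200) == "keep stream"
```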
Bandwidth control and TV channel limitation
It is important to provide a way to prevent users from watching more than
a certain number of channels at any given time, the premise being to
limit the number of TV channels allowed to flow to a user interface to a
predefined but configurable number. For example, a user with two TV sets
should be able to get two TV channels simultaneously but is not allowed
to get a third channel. This allows the service provider to control
bandwidth usage and, in addition, prevents users from attaching more than
the allowed number of TV sets to a given link.
A provider might also choose to allow an additional number of TV sets for
a given subscription and then have the flexibility to charge an
additional fee for more TV sets. Also, as part of this control, it is
essential that when any
violation of the allowed number of channels is detected, a notification
(trap, for example) should be sent to the management station to inform of
this violation. This capability is based on tracking of IGMP leaves and
joins per user interface in order to count the number of flowing streams on
a given interface for a specific user.
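A hypothetical per-interface channel counter driven by IGMP joins and leaves might look like this; the class, interface names, and trap mechanism are assumptions for illustration, not a product feature:

```python
class ChannelLimiter:
    """Tracks IGMP joins/leaves per user interface and denies joins
    beyond the configured channel limit, notifying management."""

    def __init__(self, max_channels: int):
        self.max_channels = max_channels
        self.active = {}                      # interface -> set of groups

    def join(self, interface: str, group: str) -> bool:
        groups = self.active.setdefault(interface, set())
        if group not in groups and len(groups) >= self.max_channels:
            self.notify(interface, group)     # e.g. send an SNMP trap
            return False                      # deny the extra channel
        groups.add(group)
        return True

    def leave(self, interface: str, group: str) -> None:
        self.active.get(interface, set()).discard(group)

    def notify(self, interface: str, group: str) -> None:
        print(f"violation: {interface} requested {group} over limit")

limiter = ChannelLimiter(max_channels=2)
assert limiter.join("dsl-7", "232.1.1.1")
assert limiter.join("dsl-7", "232.1.1.2")
assert not limiter.join("dsl-7", "232.1.1.3")   # third TV set denied
```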
In the case where every user has a dedicated VLAN, a different model
needs to be considered. In this case, the bandwidth is equal to the number
of users times the bandwidth per channel, even if they are all watching the
same channel, because each user has their own edge-routed port.
Appendix F contains further details on the bandwidth factors at the
network edge and conditional access and content protection methods.7
Video on demand service extensions
Video on Demand is often a tandem requirement to multicast video or
television distribution. At first glance, it might seem as if the issues
surrounding deployment of an effective video on demand offering would be
easier and less stringent than those of multicast. This, however, is not
the case. This section of the chapter will investigate the nature of
video on demand as a service and the systemic requirements and design
considerations that need to be made.
Unlike multicast video distribution, which is either delivered in raw UDP
or a lightweight RTP/UDP stream (RTP is the Real-time Transport
Protocol), video on demand requires ancillary dialogs for request and
control of the media stream. The standards-based method for this dialog
is a protocol known as Real-Time Streaming Protocol (RTSP), as specified
in RFC 2326. The basic method for RTSP, which uses port 554, is to
provide the facility for the requesting client to issue describe, setup,
and play directives for media to the VoD server. Once the session is
established, the media is delivered on a separate port dialog using RTP
over UDP. (This is the usual
7. Please refer to the Disclosure Notice in the Section VI Introduction (page 498).
case. There are methods to use the RTSP port connection as noted below.)
The figure below illustrates these dialogs.
Figure 24-7: Illustrated RTSP session flow
Note: The media streams can be in protocol formats other than RTP
over UDP. RealNetworks, for example, implements its own Real Data
Transport (RDT) as the media stream protocol. Also, there are methods
of delivering the media stream in-line over the RTSP/TCP connection.
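For illustration, the RTSP request sequence (describe, setup, play over port 554, per RFC 2326) can be sketched as plain request formatting; the URL, ports, and session ID below are hypothetical:

```python
def rtsp_request(method, url, cseq, extra=None):
    """Format one RTSP/1.0 request with a CSeq header and optional
    extra headers, terminated by a blank line per the RFC."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    lines += extra or []
    return "\r\n".join(lines) + "\r\n\r\n"

url = "rtsp://vod.example.net:554/movie.mpg"          # hypothetical asset
describe = rtsp_request("DESCRIBE", url, 1, ["Accept: application/sdp"])
setup = rtsp_request("SETUP", url, 2,
                     ["Transport: RTP/AVP;unicast;client_port=5000-5001"])
play = rtsp_request("PLAY", url, 3, ["Session: 12345", "Range: npt=0-"])

assert describe.startswith("DESCRIBE rtsp://")
```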
In the diagram above, the RTP media stream is illustrated by the Packet
arrows going from right to left (server to client). Below this, arrows are
shown indicating reports going from left to right (from client to server).
This dialog is facilitated by Real-Time Control Protocol (RTCP), which
provides feedback from the client to the server regarding latencies, packet
sequence and loss. Some decoders are equipped with the ability to repair
the incoming stream by inserting double frames or blanks and also
providing reordering of the packets prior to decoding. Note that these
features are application/vendor specific and not part of the RTP/RTCP
specification.
Another point to note is that some media streams, usually web streaming
applications, will deliver separate audio and video streams. In these
instances, RTSP calls up two sets of RTP/RTCP ports, one for audio and
the other for video. The RTP port is usually called up from a dynamic
default range (again vendor specific) but must be an even-numbered port.
The corresponding RTCP port will always be n+1, where n is the RTP port
value. This means that the RTCP port value is always odd. This behavior
is specified in the RTP standard (RFC 1889).
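The even/odd pairing rule can be captured in a small helper (a sketch; the base-port choice is illustrative):

```python
def rtp_rtcp_ports(base: int) -> tuple:
    """Pick an RTP/RTCP port pair per RFC 1889: RTP on an even port,
    RTCP on the next (odd) port."""
    rtp = base if base % 2 == 0 else base + 1   # round up to even
    return rtp, rtp + 1

assert rtp_rtcp_ports(5000) == (5000, 5001)
assert rtp_rtcp_ports(5001) == (5002, 5003)
```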
So, as an analogy, RTP/RTCP might be considered the VCR tape, while RTSP
is the control panel of the VCR unit. It should now be obvious to the
reader that the viewer's experience is related not only to the delivery
of the media but also to the performance of these ancillary protocols as
well. Consequently, when considering quality of service metrics, it is
generally recommended to give the signaling protocol a service class at
least equivalent to that of the media stream.
Another aspect of video on demand that is different from multicast is that it
is a dedicated unicast stream between the server and the client.
Consequently, a different traffic demand is exhibited on the network. In
multicast, the burden is on the devices that perform stream replication.
These devices are the L3 switches and routers that are multicast enabled.
Here, the burden is in CPU and memory load regarding how much state
information the device can effectively process without significant packet
loss to the streams they are servicing. The traffic burden is conversely light,
however. A 3 Mb/s stream at 100 viewing clients is still a 3 Mb/s stream at
the server or encoder source. This is not the case with video on demand.
In video on demand, there is no burden of stream replication; however, 100
viewing clients watching 3 Mb/s streams equates to 300 Mb/s at the server
source. This is the case even if all viewing clients are watching the same
movie! This is referred to as bandwidth aggregation; as pointed out
earlier, it is inherent in the traffic characteristics of the VoD
service.
One can invest in a bigger server (or even create a load-balanced server
farm) and provide faster network interfaces on it but, unless there is
consideration for the aggregation of the bandwidth service requirement, a
certain amount of oversubscription is bound to occur. This is typically
how such services are handled: some portion of the subscriber base is
planned for using the service at a given point. Anyone else is out of
luck for that period of time (the length of the shortest VoD playtime
that is left from the time of service refusal).
The other option is to over-provision the network to allow for the
bandwidth aggregation. While these approaches work for small networks
and small subscriber bases, both fail as the scale of the network or the
service increases. As an example, a 7,000 subscriber base would equate to
a bandwidth aggregation factor of 21 Gb/s. Overprovisioning for the
bandwidth is clearly out of the question for all but the largest providers.
With oversubscription, a problem raises its head in raw numbers.
With 100 users, oversubscribing at 50% means that fifty people might
potentially desire to use the service and are denied within the average
program run (ninety minutes for most movies). With 7,000 subscribers, that
same approach would result in 3,500 subscribers that could experience
service refusal. The laws of probability work here in that as the
oversubscribed base increases, the likelihood that service refusal will be
experienced increases as well, because the average program run (ninety
minutes) does not change. In essence, we are comparing the probability of
fifty subscribers demanding the service within an average program run
against that of 3,500 subscribers doing so in the same period. This is
the case even though the service provider has already allocated 10.5 Gb/s
to cover 50% of the subscriber base. Clearly, there needs to be another
option for VoD deployment.
All of this is symptomatic of centralized serving of the video on demand
service. These are, in fact, the inherent scaling limits of such an
approach. Distributing the server process out to the edge provides a much
more scalable method for video on demand services by doing two things.
First, by distributing servers out closer to the viewing base, the
bandwidth aggregation factor is reduced in proportion to the reduction in
the size of the subscriber base served.
As an example, if the 7,000 subscriber network were served by ten servers
instead of one, the subscriber base attached to each server would be 700.
Now a bandwidth aggregation factor of 2.1 Gb/s is the result when 100% of
the base is provisioned for. By oversubscribing at 50%, the bandwidth
factor becomes a manageable 1.05 Gb/s and only 350 subscribers are thrown
to the wolves with the possibility of service refusal during the average
program run. Now let's multiply the number by four, so that the 7,000
subscriber network is served by forty VoD servers. This means that each
server will provide streams to 175 subscribers. This results in a
bandwidth aggregation factor of a quite manageable 525 Mb/s. In this
scenario, the service provider can now easily provision for 100% of the
viewing base, thereby resulting in a true real-time video on demand
service offering.
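The bandwidth aggregation arithmetic used throughout this section can be reproduced with one function (a sketch of the chapter's figures, not a capacity-planning tool):

```python
def aggregation_gbps(subscribers: int, rate_mbps: float, servers: int = 1,
                     take_rate: float = 1.0) -> float:
    """Unicast VoD load per server: unlike multicast, every viewer adds
    a full stream at the source, so load scales with the subscriber base
    served by each server and the fraction planned to be active."""
    return subscribers / servers * rate_mbps * take_rate / 1000

# Chapter figures: 7,000 subscribers at 3 Mb/s.
assert aggregation_gbps(7000, 3) == 21.0                 # centralized, 100%
assert aggregation_gbps(7000, 3, take_rate=0.5) == 10.5  # 50% oversubscribed
assert aggregation_gbps(7000, 3, servers=10) == 2.1      # ten edge servers
assert aggregation_gbps(7000, 3, servers=40) == 0.525    # forty edge servers
```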
The second aspect to consider is that by distributing the VoD servers out
towards the edge, the RTSP control flow loop is likewise shortened. This
improves the user experience and reduces the burden on each server
because of the reduced subscriber base that it supports. The figure below
shows a network topology in which the VoD servers are placed out at the
topology aggregation point closest to the subscriber edge. As shown, the
viewing population served by each server has been reduced, and the video
on demand traffic burden has largely been lifted off of the core of the
network, thereby freeing it up for multicast video as well as other
real-time services such as VoIP.
Figure 24-8: A distributed video on demand deployment
Note that there are now two types of media content paths. The normal
RTSP streaming delivery is provided by the edge VoD server and is
illustrated in the figure above. The second type of media content path also
illustrated in the figure is a server-to-server dialog for the positioning of
content out to the edge servers.
The distribution can be accomplished by a number of methods, ranging from
100% prepositioning of the content to an active proxy method that pulls the
content from a central repository server and serves it from the edge while
caching it temporarily for any other interested viewer. After a period of
time, the content would then be purged from the edge server to make room
for newer content.
Usually, a given service would use a range of these methods. Hot releases,
for instance, would be completely prepositioned at the edge so that
maximum subscriber exposure and availability is maintained for that
content. Other content might be mainstream contemporary releases that
still have a degree of popularity. This group of content might be partially
prepositioned, a technique also known as prefix caching. Prefix caching
allows for the prepositioning of a given percentage of the media content.
An administrator could have several ranges of content, some at perhaps
fifty percent, others at lower increments such as ten or twenty percent.
For these groups of content, the RTSP stream delivery is served from the
edge server out of the prefix while the rest of the content (the suffix) is
pulled down from the central repository via the media positioning path.
Chapter 24 IP Television Example 605
Copyright 2004 Nortel Networks Essentials of Real-Time Networking
Any other requests for that piece of content would be served from the edge.
After a given period of time, provided no one else requests the content, the
suffix would be purged from the edge server, whereas the prefix would
remain cached at the edge for the next request.
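As a rough illustration of the prefix/suffix split described above, the sketch below holds only the leading fraction of an asset at the edge and pulls the suffix from the central repository at serve time. The store names, asset layout, and 20% split are all hypothetical.

```python
# Hypothetical in-memory stand-in for the central repository.
CENTRAL_REPOSITORY = {"movie-42": b"0123456789" * 100}  # full asset, 1000 bytes

def preposition_prefix(asset_id, prefix_fraction):
    """Copy only the leading fraction of an asset to the edge cache."""
    content = CENTRAL_REPOSITORY[asset_id]
    split = int(len(content) * prefix_fraction)
    return {"asset_id": asset_id, "split": split, "prefix": content[:split]}

def serve_from_edge(cache_entry):
    """Stream the prefix from the edge, then pull the suffix centrally."""
    yield cache_entry["prefix"]  # served immediately from the edge server
    suffix = CENTRAL_REPOSITORY[cache_entry["asset_id"]][cache_entry["split"]:]
    yield suffix                 # fetched via the media positioning path

entry = preposition_prefix("movie-42", prefix_fraction=0.2)
delivered = b"".join(serve_from_edge(entry))
assert delivered == CENTRAL_REPOSITORY["movie-42"]  # viewer sees the whole asset
```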
The distribution of content is not in itself sufficient to address the complete
demands of the client's request for media. There needs to be a way to
route each client's request to the appropriate video server. When a video
asset is published into the video server system, a set of metadata is
typically created. This metadata may provide content description and
duration as well as stream speed and the appropriate player call. Another
item created is a URL for the content request. By providing an RTSP
redirection to the local edge server for each VoD request, the content
distribution model described above can be leveraged.
Appendix F contains further details on the Web Streaming methods and
practices.
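Request routing by RTSP redirection can be sketched as a 302 response (RFC 2326) whose Location header points at the edge server. The URL layout and host names here are illustrative assumptions; a real deployment would map each client to its serving edge node.

```python
def rtsp_redirect(request_url, cseq, edge_host):
    """Build an RTSP 302 response steering the client to an edge server.

    The request_url/edge_host layout is a hypothetical example.
    """
    # rtsp://central.example.net/vod/movie-42 -> path "vod/movie-42"
    path = request_url.split("/", 3)[3]
    location = f"rtsp://{edge_host}/{path}"
    return (
        "RTSP/1.0 302 Moved Temporarily\r\n"
        f"CSeq: {cseq}\r\n"
        f"Location: {location}\r\n"
        "\r\n"
    )

response = rtsp_redirect("rtsp://central.example.net/vod/movie-42",
                         cseq=2, edge_host="edge1.example.net")
```

The client then re-issues its DESCRIBE/SETUP against the Location URL, so stream delivery comes from the edge while the central server only handles the initial request.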
VoD deployment in cable MSO
Cable operators are aggressively working to turn up VoD service in their
networks to compete against satellite operators. The Cable MSO network
is being driven by its subscribers toward an everything-on-demand model
with enhanced information services. Older transport media for VoD,
such as AM-QAM and DVB-ASI, are complex to operate and manage and
lack the ability to scale to the next generation of on-demand
challenges facing cable operators.
Video on Demand service in a Cable MSO comprises several elements to
form a service offering. The Primary Hub or Head End in the case of a
centralized service approach will be the repository and contribution point
for all Video On Demand assets. In a decentralized model, assets will be
kept at the Primary Hub or Head End as well as at one or more Distribution
Hub(s). The distributed assets may be updated across the back office
network of the Cable MSO, or manual asset distribution may be performed.
Figure 24-9: Video on demand functional components
What is VoD in cable?
To a Cable MSO, VoD is a unicast, real-time, MPEG-2 broadcast-quality
video service. VoD is delivered via the MSO transport network to the
served Hub, where it is placed onto the Hybrid Fiber Coaxial (HFC) network
for last-mile service to the QAM set-top box.
Cable MSO VoD is not streaming, cached, or time-shifted.
VoD bandwidth
To understand bandwidth in a Cable MSO network, we will describe how
bandwidth is determined in the last mile and, therefore, how services are
constrained for VoD.
A 6 MHz North American cable channel modulated using 256 QAM
yields 38 Mb/s of derived bandwidth. Therefore, at typical MPEG-2
encoding qualities, it is proper to assume each channel carries ten VoD
services, each service equal to 3.8 Mb/s.
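Working in kb/s to keep the arithmetic exact, the channel capacity above comes out as:

```python
CHANNEL_KBPS = 38_000   # 256-QAM in a 6 MHz North American channel
STREAM_KBPS = 3_800     # one MPEG-2 VoD service

streams_per_channel = CHANNEL_KBPS // STREAM_KBPS
print(streams_per_channel)  # 10 VoD services per RF channel
```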
Digital cable headend
The chosen digital cable headend determines to a large extent the
equipment used throughout the digital cable network. An example of this is
Scientific Atlanta* (SA), which manufactures headend equipment for the
control and management of digital cable network resources. A Digital
Network Control System (DNCS) is used in SA headends to enumerate set-
top boxes, control encryption tokens, and assign or arbitrate
narrowcast segment channels. The SA approach requires the use of
its Modular QAM (MQAM) product at the edge of the distribution
network facing the customer settops, because SA controls the encryption
key algorithm from the headend with the MQAM and does not release the
algorithm to competitors. In so doing, SA's DNCS is the sole device in the
network responsible for all resource assignment at the edge of the network.
Other digital headends exist in North America and Europe, such as
General Instrument* and Zenith*. Generically, in these other systems a
Session Resource Manager arbitrates all the functions that a DNCS
performs in an SA system. These non-SA systems do not employ the same
encryption methodology in the delivery of digital video assets.
The industry belief in cable is that theft of service costs relatively less than
the complete prevention of theft. For VoD, a DNCS or Session Resource
Manager acknowledges per-subscriber asset billing; therefore, VoD as a
new service offering is quite secure.
MQAMs take ingress DVB-ASI MPEG program IDs and modulate each
ID present onto a directed timeslot of an RF channel. These timeslots are
simply frequencies within the RF channel used. As you will recall from the
VoD bandwidth discussion, a 6 MHz RF channel supports ten VoD
streams.
The DNCS in the Headend is connected to all the served Hubs via an ATM
network typically implementing LANE (LAN Emulation on ATM). This is
accomplished via an overlay ATM-on-SONET network to every Hub.
Scientific Atlanta originally recommended Fore Systems* ASX1100 and
ForeRunner 200s for the Headend-to-Hub LANE setup. As a result, Fore
switches are typically found in Cable MSO networks strictly for the
needs of DNCS communications.
Set-top box operation
Set-top boxes from Scientific Atlanta use the PowerTV* Operating System to
interact with both the network and end subscribers. PowerTV OS is
licensed to several set-top manufacturers and is currently a predominant
player in QAM-based settops.
When a PowerTV settop boots up, it requests its inband IP address, which
is invisible to the subscriber. This is handled by BOOTP; newer settops
also support DHCP. Once the settop has an address, it
requests its image download. The settop IP network is a single nonrouted
Layer 3 domain in which the DNCS, in the case of a Scientific Atlanta
setup, shares a connection. The DNCS listens for all new set-top box
initialization requests. The DNCS arbitrates enumeration of settops onto
the network, and directs them to a Token Encryption Device (TED) for a
smart-card security key.
After enumerating the settop into its device table, the DNCS sends the
settop its Electronic Program Guide (EPG) and any additional APIs, such
as a VoD API written specifically for a certain type or revision of settop,
to ensure support of trick-play operations (pause, fast forward, and rewind)
with the VoD server.
The settop uses the Data Channel Gateway (DCG), often referred to as a
Return Path Demodulator (RPD) in an SA system, for two-way real-time
control and signaling support to the set-top box. It basically takes data from
the Fore ATM LANE network and converts it into the DAVIC out-of-band
channel format used for communicating with the set-top box over the HFC
media.
The upstream path between the set-top box and the headend is provided by
the DCG's MAC interface, which assigns timeslots to each set-top box that
wants to communicate with the headend. The physical channel frequency
between the DCG and the set-top box lies between 5 and 42 MHz (a QPSK-
modulated channel).
This path is used for any communication between the set-top box
and the DNCS; that is, all DSM-CC messages go through this path, with
the DCG terminating the QPSK channel and providing an L2 or L3
IP interface to the remaining network.
Cable network communication paths
The Cable network uses modulation in several frequency ranges to enable
downstream path and return path communications for several facilities.
The downstream facilities that a Cable network supports are as follows:
- Set-top loader path
- Analog broadcast contribution (vestigial sideband)
- Digital broadcast contribution (QAM 64/256)
- Cable modem (QAM 64/256, DOCSIS or nonstandard cable modem)
- Digital QAM narrowcast contribution (QAM 64/256)
- TDM voice (Cornerstone Voice)
The upstream facilities that a Cable network supports are as follows:
- Set-top control (DSM-CC messaging)
- Cable modem (QAM 16, DOCSIS or nonstandard cable modem)
- Out-of-band FM data channels
- TDM voice (Cornerstone Voice)
VoD narrowcasting
A Cable network, as with any network, requires segmentation in order to
scale to the subscriber base and offer additional bandwidth efficiency.
Cable networks accomplish segmentation via RF combiners in the Hubs
serving subscribers. If 5-550 MHz of frequency is always present on one
coax feed, you can insert higher frequency ranges at an individual hub
for narrowcasted services.
Example:
Broadcast and inband set-top services are always present and carried to all
hubs as the flat 5-550 MHz network. In the hubs, we choose to add VoD
into the 550-750 MHz range.
We do so by contributing the services to the proper RF channels, then
combining them with the flat 5-550 MHz network, thereby delivering 5-750
MHz with the 550-750 MHz range being unique to this hub alone. This method
of microsegmentation of the frequency plant is known as spatial reuse. The
term is appropriate because we reuse sections of the 550-750 MHz space for
every narrowcast segment created.
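Combining the 6 MHz channel and ten-streams-per-channel figures from the VoD bandwidth section with the 550-750 MHz narrowcast range gives a rough per-hub VoD capacity. This back-of-the-envelope sketch ignores guard bands and channel-plan details.

```python
NARROWCAST_LOW_MHZ, NARROWCAST_HIGH_MHZ = 550, 750
CHANNEL_WIDTH_MHZ = 6
STREAMS_PER_CHANNEL = 10  # 256-QAM, 3.8 Mb/s MPEG-2 services

narrowcast_channels = (NARROWCAST_HIGH_MHZ - NARROWCAST_LOW_MHZ) // CHANNEL_WIDTH_MHZ
vod_streams_per_hub = narrowcast_channels * STREAMS_PER_CHANNEL
print(narrowcast_channels, vod_streams_per_hub)  # 33 channels, 330 streams
```

Because the 550-750 MHz range is reused at every hub, each narrowcast segment gets this capacity independently.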
Summary
This chapter has covered all aspects of unidirectional video transport
utilizing IP networking technology. First, the multicast delivery of video
content was covered with a review of industry prevalent methods and
directions for multicast technology. Many optimizations need to be made to
the standard Internet Service Model (ISM) for multicast. Among these are
DVMRP and PIM multicast static routes, IGMP snooping and timing
optimization, as well as access control and management. The reader should
be able to discuss these enhancements and how they provide drastic
increases in the performance profile of the multicast service model for the
support of IP-based television services in a noncable network topology
environment. The reader should also be able to discuss dense versus sparse
mode multicast models, as well as single source modifications to sparse
mode multicast.
Aspects of unicast video were also discussed. First, standards-based Video
on Demand services that utilize RTSP/RTP transport were covered. The
mechanics of each protocol were discussed, along with how they relate to
the actual service from the user, client, or set-top box perspective. Traffic
engineering concerns were also discussed, including the different bandwidth
demands that Video on Demand services place on the IP network when
compared with multicast video delivery. Centralized versus distributed
Video on Demand architectures were discussed, with a comparison of the
bandwidth demands of each.
Finally, VoD service offerings within the cable provider environment were
covered. Cable networking communication paths were discussed with
particular emphasis on Video on Demand. The reader should be able to
explain how IP-based VoD services are overlaid onto the QAM transport
that is used in the CATV network. The reader should also be able to
describe the initialization of the set-top box and how it is brought online
to the cable service offering.
Appendix A
Additional Details about TDM
Networking
SONET/SDH hierarchy
Knowing how many voice channels fit into each level of the hierarchy can
be used to calculate the total payload of each system. Moving up the chart
in Figure A-1, for example, we see:
- DS1: 24 channels
- DS3: 24 channels x 28 DS1/DS3 = 672 channels
- OC-3: 672 channels x 3 DS3/OC-3 = 2,016 channels
- OC-12: 672 channels x 12 DS3/OC-12 = 8,064 channels
- OC-192: 672 channels x 192 DS3/OC-192 = 129,024 channels
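The channel counts above can be tabulated directly from the multiplexing ratios:

```python
DS0_PER_DS1 = 24        # 64 kb/s voice channels per DS1
DS1_PER_DS3 = 28
DS3_PER_OC = {"OC-3": 3, "OC-12": 12, "OC-192": 192}

def voice_channels(level):
    """Total voice channels carried at a given hierarchy level."""
    if level == "DS1":
        return DS0_PER_DS1
    if level == "DS3":
        return DS0_PER_DS1 * DS1_PER_DS3
    return DS0_PER_DS1 * DS1_PER_DS3 * DS3_PER_OC[level]

for level in ("DS1", "DS3", "OC-3", "OC-12", "OC-192"):
    print(level, voice_channels(level))
```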
Figure A-1: SONET/SDH hierarchy
Stratum level clocks
At the apex of the clocking hierarchy are the most precise clocks, called
Stratum 1 clocks. The following table shows the defined Stratum levels,
their accuracy, their ability to synchronize to a higher level clock, and the
required stability of the clock. The table shows this information in two
forms: first the stability in terms of drift, and second the time that it takes to
accumulate 193 bit slips (one T1 frame). The latter measurement is where
the TDM system will experience data loss because when a frame slip
occurs, the TDM system will attempt to resynchronize itself to the data
stream and, in so doing, will lose all of the data bits during the resync
operation.
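The frame-slip times in Table A-1 follow from simple arithmetic: at a fractional frequency offset f, bit slips accumulate at f x 1.544 x 10^6 per second, and one frame slip is 193 accumulated bit slips.

```python
T1_BIT_RATE = 1.544e6   # bits per second
BITS_PER_FRAME = 193    # one T1 frame

def seconds_to_frame_slip(freq_offset):
    """Time to accumulate 193 bit slips at a given fractional offset."""
    return BITS_PER_FRAME / (freq_offset * T1_BIT_RATE)

# Stratum 1, offset 1e-11: about 144 days between frame slips.
print(seconds_to_frame_slip(1e-11) / 86_400)
# Stratum 2 drifting at 1e-10 per day: about 14.4 days.
print(seconds_to_frame_slip(1e-10) / 86_400)
```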
Table A-1: Stratum clock hierarchy

Stratum  Precision(1)   Synchronization    Stability (Drift)   Minimum Time between Frame Slips(2)
1        1 x 10^-11     --                 --                  144 days
2        1.6 x 10^-8    +/- 1.6 x 10^-8    1 x 10^-10/day      14.4 days
3E       1 x 10^-6      +/- 4.6 x 10^-6    1 x 10^-8/day       3.5 hours
3        4.6 x 10^-6    +/- 4.6 x 10^-6    3.7 x 10^-7/day     77 minutes
4E       32 x 10^-6     +/- 32 x 10^-6     32 x 10^-6          6.7 minutes
4        32 x 10^-6     +/- 32 x 10^-6     32 x 10^-6          6.7 minutes

(1) A precision of 1 x 10^-11 is equivalent to deviating by no more than one part in 100,000,000,000.
(2) The time between frame slips is the time required to accumulate 193 bit slips at 1.544 x 10^6 bits per second. It is a minimum time because it is a limit on the worst-case drift. Note that the time to the first frame slip may be half of this value.

Stratum 1 clocks can be used to control Stratum 2, 3E, 3, or 4 clocks.
Stratum 2 clocks may be used to control Stratum 2, 3E, 3, 4E, or 4 clocks.
Stratum 3 clocks may be used to control Stratum 3, 4E, or 4 clocks. Stratum
4E or 4 clocks should not be used to control other clock sources.

DS1 Timing circuits
In the early days of digital telephony, precision clocks were very expensive.
Building a separate high-speed network to distribute clock signals would
have been prohibitively expensive. The distribution mechanism used to
distribute clock signals was, and to a very large extent still is today, a
working DS1 rather than a DS1 from a dedicated timing network.
Appendix B
RTP Protocol Structure
The structure of the RTP packet is shown in Figure B-1. The following
paragraphs give a detailed description of each field.
Figure B-1: RTP protocol structure
Version (V) [two bits]: The field indicates the version of RTP used
to create the packet.
Padding (P) [one bit]: P is set when the packet payload contains
padding; that is, one or more octets at the end of the payload are not
part of the transmitted data. Padding may be needed when RTP
packets are packed together in a lower-layer PDU. Also, some
processes, such as encryption algorithms, require fixed block sizes.
Extension (X) [one bit]: If X is set, the fixed header is followed by
exactly one header extension.
CSRC count (CC) [four bits]: The field indicates how many CSRC
identifiers are contained in the CSRC identifier field. This field has
a nonzero value only if the packet has been processed by an RTP
mixer.
Marker bit (M) [one bit]: Setting M indicates that a frame
boundary or other significant event is marked in the bit stream. For
instance, an RTP marker bit is set if the packet contains a few bits of
the previous frame along with the current frame.
Payload type (PT) [seven bits]: PT indicates the type of data
carried in the RTP packet. RTP Audio Video Profile (AVP) defines
a default static mapping of payload type codes to payload formats.
Additional payload types can be registered with IANA.
Sequence number [sixteen bits]: This value increments by one for
each RTP data packet sent. Initially, the sequence number is set to a
random value. The receiver can use the sequence number to detect
packet loss or to restore the packet sequence.
Timestamp [32 bits]: The timestamp reflects the sampling instant of
the first octet in the RTP data packet. The sampling time must be
derived from a clock that increments monotonically and linearly in
time to allow synchronization and jitter calculations at the receiver.
The initial value should be random, so as to prevent known plaintext
attacks. For example, if the RTP source uses a codec that buffers
20 ms of 8 kHz audio, the RTP timestamp must be incremented by 160
for every packet, irrespective of whether the packet is transmitted or
dropped by a silence suppression feature.
Synchronization Source Identifier (SSRC) [32 bits]: This field
identifies the source that is generating the RTP packets for this
session. The identifier is chosen randomly with the constraint that
no two sources within the same RTP session have the same value.
Contributing Source Identifier (CSRC) list [various]: The list
identifies the contributing sources for the payload contained in this
packet. The maximum number of identifiers is fifteen, the largest
value the four-bit CC field can express. If there are more than
fifteen contributing sources, only the first fifteen are identified.
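The fixed-header layout described above can be unpacked directly. This sketch covers only the 12-byte fixed header and the CSRC list, not header extensions or padding removal.

```python
import struct

def parse_rtp_header(packet):
    """Decode the 12-byte fixed RTP header and any CSRC identifiers."""
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    cc = b0 & 0x0F
    csrcs = struct.unpack(f"!{cc}I", packet[12:12 + 4 * cc]) if cc else ()
    return {
        "version": b0 >> 6,
        "padding": (b0 >> 5) & 1,
        "extension": (b0 >> 4) & 1,
        "csrc_count": cc,
        "marker": b1 >> 7,
        "payload_type": b1 & 0x7F,
        "sequence_number": seq,
        "timestamp": timestamp,
        "ssrc": ssrc,
        "csrc_list": csrcs,
    }

# V=2, no padding/extension/CSRCs, marker set, PT=0 (G.711 mu-law in AVP).
pkt = struct.pack("!BBHII", 0x80, 0x80, 17, 160, 0x1234ABCD)
hdr = parse_rtp_header(pkt)
```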
RTCP packet formats
RTCP defines five types of packet formats to convey the
statistical values flowing between two endnodes to ensure QoS, including
jitter computation, loss statistics, and synchronization information.
Sender Report (SR): used for transmission and reception statistics
from participants that are active senders (bytes sent, timestamps)
Receiver Report (RR): used for reception statistics (estimated
packet loss, interarrival jitter, round-trip delay) from participants
that are not active senders
Source Description (SDES): carries additional information on a
source (for example, CNAME, e-mail, phone number)
Bye: indicates a source is leaving the session
Application Specific: functions specific to an application
RTCP packet structure is similar to RTP: each packet begins with fixed
length elements, followed by variable length elements. The number and
length of the variable elements depends on the packet type, but each packet
must end on a 32-bit boundary. The alignment requirement and a length
field in the fixed part of each packet are included to make RTCP packets
stackable. This means that multiple RTCP packets can be concatenated to
form a compound packet and sent as a single packet of the lower-layer
protocol, such as UDP. No separators are needed. An example of a compound
RTCP packet as produced by a mixer is shown in Figure B-2.
Figure B-2: Example of an RTCP compound packet
(The compound packet shown concatenates an SR packet, carrying a sender
report and receiver reports for two sites, an SDES packet with CNAME,
PHONE, and LOC items for two sources, and a BYE packet with a reason
field, all carried in a single UDP packet. If encrypted, the compound
packet is prefixed by a random 32-bit integer.)
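The stackable property can be demonstrated with a short parser: each RTCP packet header carries its own length (in 32-bit words, minus one), so a receiver can walk a compound packet without separators. The hand-built packets below are minimal illustrations, not complete RFC 3550 encodings.

```python
import struct

RTCP_RR, RTCP_BYE = 201, 203  # packet type codes

def rtcp_packet(pt, count, body):
    """Prefix a body with a header: V=2, item count, packet type, length.

    The length field is the packet size in 32-bit words, minus one.
    """
    length_words = (4 + len(body)) // 4 - 1
    return struct.pack("!BBH", 0x80 | count, pt, length_words) + body

def split_compound(buf):
    """Walk a compound packet using only each header's length field."""
    packets, i = [], 0
    while i < len(buf):
        pt = buf[i + 1]
        length_words = struct.unpack("!H", buf[i + 2:i + 4])[0]
        size = 4 * (length_words + 1)
        packets.append((pt, buf[i:i + size]))
        i += size
    return packets

# A receiver report followed by a BYE, sent as one UDP payload.
rr = rtcp_packet(RTCP_RR, 0, struct.pack("!I", 0x1234))    # reporter SSRC only
bye = rtcp_packet(RTCP_BYE, 1, struct.pack("!I", 0x1234))  # leaving SSRC
compound = rr + bye
```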
Appendix C
Additional Information on Voice
Performance Engineering
This appendix provides additional details and discussion of jitter and the
jitter buffer.
A dedicated voice packet network in a steady state (that is, carrying
continuous voice calls, with no silence suppression, no data load, and no
congestion) will have a quasi-static flow pattern. Thus, there will be no
packet jitter. There will be a distribution of delay across the various calls,
but the delay will be invariant for any particular call. In a changing flow
pattern, where voice calls are being set up and cleared down, or where
silence suppression is in use, the changing instantaneous load versus the
output link speed will give rise to changing contention for the output link,
resulting in homogeneous jitter. When data is added (with associated
forwarding classes to prioritize voice and data traffic), the relative traffic
load of each forwarding class versus the output link speed will give rise to
changing contention for the output link, resulting in heterogeneous jitter.
Figure C-1: Jitter as a function of the percent of maximum voice load, packet
size, and link speed (bits/s). Voice and data traffic, single-queue
stat mux, M/D/1, G.711 voice packets of varying sizes.
Figure C-1 shows the relationship between the loading (utilization) of a
link and the amount of jitter experienced on the delay. Note that lower-
speed links generally have higher jitter at all values of utilization, and that
lower-speed links also show inflated jitter at lower loading than do high-
speed links. This reflects the statistical smoothing of traffic on high-speed
links.
In networks carrying voice and data, jitter can be significantly reduced by
strict priority scheduling and proper load balancing, which prevent
individual nodes from being oversubscribed. A good general rule is to keep
loading (utilization) below 90%; for many link speeds, jitter becomes
uncontrollable by about 90%. Avoid loading above 90% on low- and
moderate-speed links; very high link speeds tolerate higher loading. The
more jitter is controlled, the better, since jitter buffer waiting time is reduced.
Network jitter
Network jitter refers to the jitter in a core network that uses generally high-
speed links (>= 10 Mb/s). Jitter is no longer significant above 10 Mb/s,
provided the post-90% loading asymptote is not reached. In situations
where the statistical multiplexer output link loading is less than 90%, jitter
is bounded to a few milliseconds and will have no or negligible impact on
voice quality. Where statistical multiplexer output link loading is not
controlled (that is, no admission control or under-provisioning), loading is
unbounded (from above 90% to beyond 100%) and, therefore, jitter is
unbounded: the delay can rise asymptotically. Voice quality becomes
unpredictable and unstable, especially as packets are dropped. For every
10 ms of additional jitter, voice quality degrades by 0.5 R at Delay = 150 ms,
1 R at Delay = 200 ms, and 1.3 R at Delay = 250 ms (Delay is the one-way
mouth-to-ear delay).
The potentially deleterious effects of homogeneous jitter and packet loss
are only adequately resolved by bounding the percentage load, using network
admission control, aggregate call admission control, and traffic monitoring
and placement.
Access/source jitter
Access jitter refers to the jitter in the access network that generally uses
low-speed links (< 10 Mb/s). As the data loading increases relative to the
voice, the probability that a data packet is in the process of transmission
increases, and the voice jitter increases, even with strict priority of voice
over data. For a given relative voice/data loading, as link speed drops, long
data packets take more serialization time, scaling the voice jitter. Jitter in
all low-speed packet access networks (cable, Enterprise, xDSL) can dwarf
network jitter in high-speed networks by several orders of magnitude.
Depending on the access link speed, the potentially deleterious effects of
heterogeneous jitter may be bounded by limiting data load, and segmenting
and/or preempting long data packets.
Figure C-2: Jitter delay distribution on a congested link (> 90% average
voice load)
Figure C-3: Jitter delay distribution on a non-congested link (< 90% average
voice load). On an appropriately loaded output link, late packet
clusters are less systematic, and of much shorter duration and
frequency. Jitter delay distribution is bounded to tolerable limits,
and packet loss probability in limited-size buffers is negligible.
(Simulation conditions for Figures C-2 and C-3: 256 kb/s DSL links; 24
G.729 calls on AAL2 with CU = 1 ms; 10 ms packets with silence
suppression. Congested case, Figure C-2: jitter 94 ms, mean delay 6.2 ms,
standard deviation 9.6 ms. Non-congested case with a congestion control
mechanism, Figure C-3: jitter 3 ms.)

Jitter buffer dimensioning
In Voice over IP (VoIP), a jitter buffer is a shared data area where voice
packets can be collected, stored, and sent to the voice processor in evenly
spaced intervals. Variations in packet arrival time, called jitter, can occur
because of network congestion, timing drift, or route changes. The jitter
buffer, which is located at the receiving end of the voice connection,
intentionally delays the arriving packets so that the end user experiences a
clear connection with very little sound distortion. The jitter buffer must be
designed for an expected profile. To do that requires deterministic
conditions.
Where the persistence of instantaneous arrivals and the average arrival rate
do not exceed the available buffer space, no packets will be lost in
network routers or in the receiving media gateway.
To achieve that condition requires that the average arrival rate not
exceed a certain percentage (utilization/loading) of the outgoing link speed
of each multiplexing stage in the connection. For voice traffic with an
assumed uniform periodic profile at the point of origin and a constant
bandwidth requirement, that percentage can be as high as 90%,
provided the traffic is all voice or, if shared with data, that voice has
absolute priority over data.
The buffer can then be selected to accommodate the expected longest
persistence of instantaneous loading above its average level of fill.
Under those circumstances, where voice packets originate, traverse, and
terminate over links in excess of 10 Mb/s, one can expect the induced jitter
(resulting from the convolution of the behavior of the concatenated
multiplexing stages) to be of the order of a few milliseconds at most.
Consequently, a 10 ms jitter buffer should be sufficient. Jitter buffers can
and should be sized independently of packet size.
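A fixed jitter buffer can be pictured as a playout schedule: the first packet is held for the buffer delay, and each later packet must arrive before its slot or be discarded as late. The sketch below assumes 20 ms packets and the 10 ms buffer suggested above; the names and arrival times are illustrative.

```python
def playout_schedule(arrival_times, packet_period=0.020, buffer_delay=0.010):
    """Return (played, late) counts for a fixed jitter buffer.

    Packet i plays at first_arrival + buffer_delay + i * packet_period;
    a packet arriving after its slot is counted as late (discarded).
    """
    base = arrival_times[0] + buffer_delay
    played = late = 0
    for i, arrival in enumerate(arrival_times):
        if arrival <= base + i * packet_period:
            played += 1
        else:
            late += 1
    return played, late

# Jitter within the 10 ms buffer: every packet makes its slot.
print(playout_schedule([0.000, 0.021, 0.044, 0.060]))  # (4, 0)
# The third packet delayed 16 ms past nominal: it misses its slot.
print(playout_schedule([0.000, 0.021, 0.056, 0.060]))  # (3, 1)
```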
Where those circumstances do not prevail (that is, network loading is not
controlled or bounded), there is little that can be said about determining the
correct jitter buffer size. Without a firm expectation of the instantaneous
and average traffic profile (that is, knowledge of the total traffic admitted to
the network and its load balance across the network), the probability of
unbounded persistence and uncontrolled average loading is not only
increased but itself unbounded. No size of jitter buffer in any router or
receiving media gateway can be considered big enough. One should always
engineer a managed network for well-behaved normal operation, with a
sufficiency of controls and monitors and a capacity suited to demand. In
such circumstances, a jitter buffer of a few milliseconds will have
negligible impact on voice quality and will fully absorb the packet delay
variation of packet switching.
Voice QoE = f (Conversation dynamics, Voice Signal Fidelity)
= f (Delay, Distortion)
= f (Delay, Speech Distortion, Sound & Echo level)
Where:
Distortion includes impairments due to compression coding, end
devices, and lost/late voice packets.
Echo includes impairments due to hybrid inductive coupling (trans-
hybrid loss) and acoustic coupling in the terminal handset/headset.
TELR (talker echo loudness rating) is the parameter defining the
level of echo signal reflected back to the talker.
Loss plan includes impairments due to nonoptimal signal loudness
SLR (send loudness rating), RLR (receive loudness rating), CLR
(circuit loudness rating) and TELR (talker echo loudness rating).
Delay includes impairments due to propagation, processing and
packetization, queuing/jitter, and switching. Delay contributes to echo
impairment, but is also an impairment on its own when the total delay
becomes sufficiently high.
As we combine the various sources of delay, distortion, echo, and signal
level, determining the voice quality becomes more complex. A
standardized tool and method has been developed by the ITU to
assist in voice quality prediction. The ITU-T G.107 E-model is an
analytical tool for estimating the end-to-end conversational voice quality
across any network.
Figure C-4: QoE performance metrics
The E-Model combines all the contributing objective characteristics of a
connection (including signal level, circuit noise, sidetone, echo, delay,
jitter, and codec distortion) to obtain a prediction of the subjective voice
performance. It is based on a large body of subjective voice quality data,
which takes into account a wide range of impairments. Although the E-
model is based upon subjective voice quality data, the E-model remains an
objective method of predicting voice quality, and as such, is repeatable and
consistent (the same conditions will always return the same prediction).
Thus the voice Quality of Experience can also be expressed in terms of R:
Voice QoE = f (E-Model Transmission Rating R)
Contributing factors:
1. Delay: packetization/de-packetization; DSP/CPU processing;
propagation (distance); queuing (congestion, inadequate buffer size);
de-jitter buffer; channel coding, interleaving, scheduling/polling.
2. Distortion: codec speech coding; use of voice activity detector;
late/lost packets; transcoding.
3. Sound level: loss pads, terminal loudness ratings.
4. Echo level: echo control methods, suppressor, canceller.
QoE performance metric: E-Model Transmission Rating (R), see ITU-T
Rec. G.107. R summarizes the effects of network impairments including
delay, distortion, loss plan, and level of echo [2].
1. Delay
Packetization/de-packetization
DSP/CPU processing
Propagation (distance)
Queuing (congestion, inadequate buffer size)
De-jitter buffer
Channel coding, interleaving, scheduling/poling
2. Distortion
Codec speech coding
Use of voice activity detector
Late/loss packets
Transcoding
Sound Level (Loss pads, terminal loudness ratings)
Echo Level (Echo control methods, suppressor, canceller)
E-Model Transmission Rating (R)
See ITU Rec. G.107
R summarizes the effects of network
impairments including delay, distortion, loss plan
and level of echo
[2]
Contributing Factors QoE Performance metrics
624 Appendix C Additional Information on Voice Performance Engineering
Essentials of Real-Time Networking Copyright 2004 Nortel Networks
Transmission-rating R, the output variable of the ITU E-Model (Rec.
G.107), is a voice quality indicator. R can be used to predict and compare
conversational quality of voice calls on all types of networks.
Avoiding lost packets and missing data
In an uncongested network using the correct jitter buffer delay, there will
be no or negligible network packet drops. However, in congestion
situations, packets are lost and jitter is uncontrolled, leading to many late
packets. The best solution to late and lost packets is to engineer the
network to preclude or at least minimize the situations that cause delays
and packet loss. Networks can be engineered to keep the packet loss rate
low or even zero.
Some strategies for minimizing lost packets are described below. Some of
these need to be implemented on a network basis to operate; others can be
used on a single channel to improve the quality on an individual call. In
most cases, their availability in the network will depend on implementation
by the vendor, so the following is given as a shopping list rather than a
recipe.
Match link speed to traffic load. Links should have sufficient capacity to
carry peak loads. The Implementation Section addresses this and other
network engineering issues in detail.
Call admission control. In networks with a high proportion of voice
traffic, call admission control can prevent congestion by limiting the
number of calls that can be active through various nodes in the network.
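The admission decision itself can be very simple. The sketch below (hypothetical class and parameter names; real CAC also accounts for layer-2 overheads and operator policy) admits a call only while the link's voice bandwidth budget can absorb its reserved rate:

```python
# Minimal call-admission-control sketch (hypothetical parameters): a link
# admits a new call only if its bandwidth budget can absorb the call's
# reserved rate; otherwise the call is rejected rather than degrading
# every call already in progress.

class Link:
    def __init__(self, capacity_kbps, voice_budget=0.8):
        self.capacity_kbps = capacity_kbps
        self.voice_budget = voice_budget   # fraction reserved for voice
        self.reserved_kbps = 0.0

    def admit(self, call_kbps):
        """Admit the call if it fits within the voice budget."""
        limit = self.capacity_kbps * self.voice_budget
        if self.reserved_kbps + call_kbps <= limit:
            self.reserved_kbps += call_kbps
            return True
        return False

    def release(self, call_kbps):
        self.reserved_kbps = max(0.0, self.reserved_kbps - call_kbps)

link = Link(capacity_kbps=1024)   # e.g. a small access link
g711_call = 80                    # ~80 kb/s per G.711 call with RTP/UDP/IP headers
admitted = sum(link.admit(g711_call) for _ in range(12))
print(admitted)                   # only 10 of 12 attempts fit in the 80% budget
```

Rejecting the eleventh call outright is the point: a blocked call attempt is a better outcome than congestion that degrades ten established calls.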
Service classes ensuring expedited forwarding for VoIP packets. Giving
voice packets priority will reduce variability in packet arrival time, and
voice packets will not be deleted to relieve congestion. This is most
effective if the network is carrying a substantial proportion of data traffic. If
the network is carrying a high proportion of voice traffic, there may still be
queuing delays in the routers with associated increase in jitter and lost
packets.
Adaptive jitter buffer. An adaptive algorithm responds to packet loss by
increasing the waiting time used by the jitter buffer, and decreases it again
when the loss rate drops back. Adaptively changing the delay in the jitter
buffer can minimize late packets when the system is congested and avoid
adding unnecessary delay when it is not. If possible, adaptation occurs
during silent periods in the speech, so the temporal shift won't be audible.
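One plausible adaptation rule (a sketch, not any vendor's algorithm) tracks a smoothed jitter estimate in the style of the RFC 3550 interarrival-jitter calculation, grows the playout delay immediately when jitter spikes, and shrinks it slowly when conditions improve:

```python
# Sketch of an adaptive jitter-buffer sizing rule (illustrative only):
# the playout delay tracks a smoothed estimate of network jitter,
# growing quickly on jitter spikes and shrinking slowly when the
# network calms down. Adjustments would be applied during silences.

class AdaptiveJitterBuffer:
    def __init__(self, min_ms=10.0, max_ms=120.0):
        self.min_ms, self.max_ms = min_ms, max_ms
        self.jitter_est = 0.0     # smoothed jitter estimate (ms)
        self.playout_ms = min_ms  # current buffering delay (ms)

    def on_packet(self, interarrival_deviation_ms):
        # Exponentially weighted jitter estimate (RFC 3550 uses gain 1/16)
        dev = abs(interarrival_deviation_ms)
        self.jitter_est += (dev - self.jitter_est) / 16.0
        # Target a few multiples of the estimate; clamp to sane bounds
        target = min(self.max_ms, max(self.min_ms, 3.0 * self.jitter_est))
        if target > self.playout_ms:
            self.playout_ms = target                              # grow at once
        else:
            self.playout_ms += 0.05 * (target - self.playout_ms)  # shrink slowly

buf = AdaptiveJitterBuffer()
for dev in [2, 3, 30, 40, 35, 5, 4, 3]:   # a burst of jitter, then calm
    buf.on_packet(dev)
print(buf.playout_ms > 10.0)              # buffer grew beyond its floor
```

The asymmetry (grow fast, shrink slow) reflects the trade-off in the text: late packets are immediately audible, while a few milliseconds of extra delay are not.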
Defining user level (QoE) and network level (QoS) performance
requirements
Perceived Quality of Experience also depends on user expectation. For
example, mobile users have different expectations about voice performance
than wireline users. Interactive conversation becomes impacted when delay
exceeds 150 milliseconds (see ITU G.114). Similarly, people have different
expectations for international/overseas calls than for local calls. To gauge
market acceptability, voice quality is best assessed from the perspective of
a known user experience. Where network providers offer reduced quality
levels for discount prices, these may be compared to the known user
experience associated with a lower quality; for example, a discount
wireline service might be benchmarked against circuit-switched wireless.
Since the packet network is a replacement technology, the voice quality
stakes are highest where user expectation is unchanged. Voice quality
planning needs to be related to the network it replaces; that is, the
current PSTN.
A five-step process based upon a top-down approach has been defined and is
shown in Figure C-5. Network-level requirements are driven by end-user
application requirements following a top-down approach; the opposite,
bottom-up approach may, we believe, lead to inadequate QoE performance.
Figure C-5: Process for determining user level (QoE) and network level
(QoS) requirements
Performance metrics and targets defining QoE for the different telecom
services are in different states of development. The requirements for
interactive voice services are more or less completely understood, while the
requirements for browsing and remote applications remain undetermined.
Voice service users are interested in experiencing clear, noise-free, and
echo-free conversations. All the parameters contributing to the QoE of an
ordinary voice call have been combined in an industry standard model
(ITU-G.107, the E-Model). In order to provide an estimate of voice quality
based upon the E-Model, a set of sixteen input parameters is required to
generate its output factor, the transmission rating (R). Some of these
parameters depend on underlying packet network behavior, and various
[Figure C-5 shows the top-down flow: define service QoE performance
metrics; define service QoE performance targets; identify network-level
contributing factors and dependencies affecting the QoE metrics; determine
QoS-enabled network architecture requirements and configuration; then run
simulations and analyze the network and QoE metric behaviour, iterating
until the QoE targets are met (validated = the services' QoE requirements
are provided by the QoS-enabled solution). The first two steps sit in the
user QoE space; the remaining steps sit in the network architecture (QoS)
space.]
methods assist in deriving estimates of these parameters using analytic or
simulation tools such as OPNET* Modeler.
Table C-1 shows the voice quality performance targets derived from known
user experience of PSTN calls. These targets are what we should aim for to
provide equivalent quality to the PSTN network. Targets are presented for
both the A-side and B-side listener. Note that local and regional calls have
no ECAN. ECANs are active in National/International and mobile calls. It
has been determined that a difference of 3R is not noticeable by typical
users and, therefore, packet networks could be engineered within this
margin in order to provide an equivalent replacement technology. A
difference of 3-7R might be noticeable but most likely acceptable. Larger R
degradations (greater than 7R) are more likely to be noticeable.
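The 3R/7R guideline above can be written down as a small helper (a hypothetical function; the thresholds are the ones stated in the text, with delta R meaning PMO minus FMO, i.e. the TDM rating minus the packet-network rating):

```python
# Classify an R-factor drop relative to the PSTN baseline, using the
# 3R / 7R noticeability thresholds described in the text.

def rate_degradation(delta_r):
    """delta_r = R(PMO) - R(FMO): rating lost in moving to packet."""
    if delta_r < 3:
        return "not noticeable to typical users"
    if delta_r <= 7:
        return "possibly noticeable, most likely acceptable"
    return "likely noticeable degradation"

print(rate_degradation(2))    # within the equivalent-replacement margin
print(rate_degradation(5))
print(rate_degradation(10))
```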
Table C-1: Voice quality performance targets based upon known user experience. Note that
PBXs have a better loss plan and echo control, resulting in a better R. Also, local and
regional calls have no ECAN active, contrary to national and international calls.
Table C-2: PSTN wireline conversational voice, voice band data, and call control
performance
Conversational voice (CBR and VBR):
  R-factor: delta R (PMO - FMO) < 3R [10]
  Delay: QoE target E2E delay < 150 ms; network targets: E2E delay 150 ms,
  core network 100 ms [6], [11] (ITU G.114, [3]); jitter 0 ms (see note [1])
  Distortion: Ie < 3R; packet loss 10^-3 end-to-end, 10^-3 core [2]
  Sound level: OLR = 10 dB
  Echo level: TELR = f(D) [3]
Path interruptions due to failure (conversational voice):
  Frequent interruption: 80 ms (affects speech intelligibility); network
  target 80 ms [4], [8]
  Infrequent interruption: 3 sec (perceived as call drop); network target
  3 sec [9]
Voice band data:
  FAX page transfer time: 30 sec per page (G.1010); jitter N/A
  Modem: N/A; packet loss 10^-5 end-to-end, 10^-5 core [5], G.1010, [14]
Path interruptions due to failure (voice band data):
  Session interruption: N/A, handled by retransmission
  T.30 FAX: 6 sec (T.30); T.38 FAX: not critical [12]; V.17 modem: 50 ms
  (V.17); V.34 modem: 70 ms +/- 5 ms (V.34); V.90 modem: 1.6 sec [13]
Call control:
  Call setup time: per ITU standards; network targets: E2E delay variable,
  20 ms [7]
  Call tear-down time: per ITU standards; packet loss 10^-3 end-to-end,
  10^-3 core
Path interruptions due to failure (call control):
  Inadequately handled call attempts: work in progress
  Interruption duration: work in progress (Q.543)

Notes:
[1] Should be engineered to meet the E2E delay; 10 ms is recommended (based upon Succession Voice Quality & Bearer Interworking Accreditation v3.3, Blouin & Bruckheimer)
[2] Sufficiently low that loss occurs only randomly, as packet loss concealment algorithms are not efficient on bursty losses
[3] Succession_UA-AAL-1_VQ_ECAN_Planning_Report (F. Blouin, M. Armstrong, R. Britt)
[4] J. G. Gruber and L. Strawczynski, "Subjective effects of variable delay and speech clipping in dynamically managed voice systems," IEEE Transactions on Communications, vol. COM-33, pp. 801-808
[5] Performance of Analogue Fax over IP (Bruckheimer)
[6] Based upon a 20,000 km international call; 100 ms was allocated for propagation delay
[7] 20 ms includes 10 ms for propagation and 10 ms for message processing; this budget allows for 4-6 messages at 20 ms to complete a transaction
[8] "Perceived Quality of Packet Audio under Bursty Losses," Wenyu Jiang and Henning Schulzrinne, IEEE INFOCOM 2002
[9] Impact of Network Outages on Voice Quality (F. Blouin, L. Thorpe)
[10] PMO: present mode of operation, that is, the existing (most likely TDM) infrastructure. FMO: future mode of operation, that is, the new packet/replacement solution
[11] Assumes most of the core delay will be due to propagation delay; should be budgeted to accommodate international calls of 15,000-20,000 km
[12] Depends on the signalling protocol (H.323, SIP)
[13] Not specified in the standard; implementation based; the default is set to 1.6 sec
[14] 10^-5 was obtained from the 10^-6 maximum BER in G.1010 and the packet size (20 ms)

Quality metrics for voice: further details
Each of the characteristics previously discussed (and other, conventional
voice impairments, such as noise and harmonic distortion) can be measured
individually. However, it is useful to have an overall indicator of voice
quality. Various metrics have been devised to quantify the overall perceived
voice quality of a component or a system. Three common metrics are
discussed below: the subjective measure called Mean Opinion Score
(MOS); an objective MOS estimator called Perceptual Evaluation of
Speech Quality (PESQ, pronounced "pesk"); and a computed metric
called transmission rating (R), which is calculated from objective
measurements of fifteen contributing parameters using an ITU standard
tool called the E-Model. Since most quality metrics are based on MOS in
some way (sometimes in name only), we will start with a taxonomy of
MOS before looking at the details of the three metrics.
Quality metrics are evaluated for individual connections. There is no single
value to describe a network: networks carry many calls, both simple and
complex, and the quality is determined by the access types, the transport
technology, the number of nodes the call passes through, the distance,
packet transport link speeds, and many other factors that differ from one
connection to another. To compare networks, specific connections
(reference connections) representing equivalent calling conditions are
defined that can be measured and compared.
Types of MOS. Mean Opinion Score began as a subjective measure.
Currently, it is more often used to refer to one or another objective
approximation of subjective MOS. Although all MOS metrics are
intended to quantify QoE performance and they all look very similar
(values between one and five), the various metrics are not directly
comparable to one another. This can result in a fair amount of confusion,
since the particular metric used is almost never reported when MOS
values are cited. To clarify the situation, ITU has defined distinct types of
MOS, as shown in Table C-3. This may seem unnecessarily complex, but at
least it reminds us that there are fundamental differences between
individual metrics, and that numerical values are not necessarily directly
comparable just because they are both called MOS.
Table C-3: Six types of MOS as defined in ITU Rec. P.800.1. Comparisons between scores
from different categories are invalid. Key: LQ = listening quality; CQ = conversation
quality; S = subjective; O = objective; E = estimated.

Source                     Listening-only   Conversation
Measured subjectively      MOS-LQS          MOS-CQS
Measured objectively       MOS-LQO          MOS-CQO
Computational estimation   MOS-LQE          MOS-CQE

Subjective MOS. MOS-LQS and MOS-CQS are direct measurements of
user impression of overall voice quality, and as such are a direct measure of
QoE. Subjective MOS is the mean (average) of ratings assigned by subjects
to a specific test case using methods described in ITU-T P.800 and P.830.
MOS-LQS is obtained from listening tests, where people listen to recorded
samples and rate their quality; MOS-CQS is obtained from conversation
tests, where people talk over experimental connections and rate their
quality. Quality ratings are judged against a five-point scale: Excellent
(five), Good (four), Fair (three), Poor (two), and Bad (one). MOS is
computed by averaging all the ratings given to each test case, and falls
somewhere between one and five. Higher MOS reflects better perceived
quality.
Mean Opinion Scores (MOS) are not a measure of acceptability. While
perceived quality contributes to acceptability, so do many other factors
such as cost and availability of alternative service.
MOS-LQS (Listening Quality Subjective) and MOS-CQS (Conversation
Quality Subjective) are strongly affected by the context of the experiment.1
There is no "correct" subjective MOS for any test case, process, or
connection. This is extremely inconvenient, since it means that it is not
possible to specify performance or verify conformance to design specs
based on subjective MOS.
PESQ (P.862). Subjective studies take significant time and effort to carry
out. MOS estimators (MOS-LQOs, where O is for Objective), such as
PESQ, were developed to provide bench tests of the voice quality of a
narrowband communications channel or device. PESQ and similar
methods2 are quick to execute and precisely repeatable; however, they
don't provide an estimate of the conversational voice quality. In addition,
they don't provide much assistance in troubleshooting the cause of a
problem.
To perform a PESQ test, one or more speech samples are put through a
device or channel, and the output captured for analysis. PESQ computes
the quality estimate by comparing the input (reference) signal with the
output signal. The more similar the two waveforms, the less distortion there
is and the better the assigned score. To be sure the differences are real and
meaningful, the algorithm does some preprocessing to equalize the levels,
time align the signals, and remove any time slips (where some time has
been inserted or deleted).
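The preprocessing steps can be illustrated with a toy example (pure illustration; real PESQ performs these operations perceptually and over time-varying intervals, and the function names here are ours):

```python
# Toy illustration of PESQ-style preprocessing before scoring:
# equalize the levels of the two signals, then time-align the degraded
# signal to the reference by cross-correlation, so that gain and fixed
# delay do not count as "distortion".

def rms(x):
    return (sum(v * v for v in x) / len(x)) ** 0.5

def equalize_level(signal, reference):
    """Scale `signal` so its RMS level matches the reference."""
    gain = rms(reference) / rms(signal)
    return [v * gain for v in signal]

def best_lag(reference, degraded, max_lag=8):
    """Find the shift of `degraded` that best matches the reference."""
    def corr(lag):
        pairs = [(reference[i], degraded[i + lag])
                 for i in range(len(reference))
                 if 0 <= i + lag < len(degraded)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr)

ref = [0, 0, 1, 2, 1, 0, -1, -2, -1, 0, 0, 0]
deg = [0, 0, 0, 0, 0.5, 1.0, 0.5, 0, -0.5, -1.0, -0.5, 0]  # delayed, quieter
deg = equalize_level(deg, ref)
print(best_lag(ref, deg))   # the degraded copy lags the reference by 2 samples
```

Only after this alignment does the residual difference between the waveforms reflect genuine distortion, which is what the perceptual model then scores.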
1. Context refers to things like the order in which the test cases are presented in the experiment, the
range of quality between the worst and best test cases used in the experiment, and whether the
subjects are asked to do a task before making a rating. If an experiment is repeated exactly (with
different subjects), similar scores will be obtained within a known margin of error. This is not the
case from one experiment to another. Consistency from test to test is found in the pattern of scores,
not in the absolute value of the scores. For example, the MOS-LQS for G.711 may be 4.1 in one study,
3.9 in another, and 4.3 in a third, but whatever the value obtained, we expect to see a higher score for
G.711 than G.729, and G.729 and G.726 (32 kb/s) to be about equal.
2. Many MOS-LQOs have been defined; aside from PESQ, the best-known are PSQM (Perceptual Speech
Quality Measure), and PAMS (Perceptual Analysis Measurement System). As the standard, PESQ
should be used in preference to the older measures.
Figure C-6: Block diagram showing the operation of PESQ and similar
objective speech quality algorithms. A known signal is input to
the system under test, and the algorithm analyses the difference
between them by applying a model of human auditory
perception followed by a model of human judgment of
preference to arrive at a quality estimate.
Otherwise, such differences might influence the score beyond their actual
effect on the subjective quality. PESQ then applies perceptual and cognitive
models that represent an average listener's auditory and judgment
processes.
Raw PESQ output is not an accurate MOS estimate, so the scores are
usually converted to a Listening Quality score using one of several
available conversion rules.
PESQ measures only the distortion in the sound signal; it does not measure
conversational voice quality. PESQ does not account for listening level,
delay, or echo. As noted above, these are all important determinants of the
voice quality. Separate measures of these characteristics must be
considered along with a PESQ score to appreciate the overall performance
of the channel.
PESQ detects distortions and estimates the associated degradation, but it
provides no information about the cause. There are many potential sources
of distortion: added noise, multiple transcoding, late or lost packets,
corrupted speech data, and so on. This means that PESQ can identify that a
problem exists, but not diagnose it.
[Figure C-6 block diagram: the original signal is fed to the system under
test, which may be a codec, a network element, or a network; the original
and output signals are compared, and their difference is passed through a
perceptual model and then a cognitive model to produce the quality
estimate.]