
Dimensioning of the LTE S1 Interface

X. Li, U. Toseef, T. Weerawardane, W. Bigos, D. Dulas, C. Goerg, A. Timm-Giel and A. Klug

Abstract: This paper presents analytical models to dimension the transport bandwidths for the S1 interface in the Long Term
Evolution (LTE) Network. In this paper, we consider two major
traffic types: elastic traffic and real time traffic. For each type of
traffic, individual dimensioning models are proposed. For
validating these analytical dimensioning models, an LTE system simulation model has been developed. The simulation results demonstrate that the proposed models properly estimate the required performance and can thus be used for link dimensioning for various traffic and network scenarios.

I. INTRODUCTION

The roadmap of the Next Generation Mobile Network (NGMN) is to provide mobile broadband services. Services
like Mobile TV, multimedia online gaming, Web 2.0, and
high-speed Internet will produce tremendous traffic in the
future mobile networks. To make this happen, 3GPP
introduces a new radio access technology, known as Long
Term Evolution (LTE) to ensure the competitiveness of the
3GPP technology family for the long term. LTE supports very high throughput and low latency, as well as improved system capacity and coverage performance.
LTE introduces a new air interface and radio access network called the Evolved UMTS Terrestrial Radio Access Network (E-UTRAN), which is specified in the new 3GPP Releases 8 and 9. To support the LTE radio interface and the E-UTRAN,
3GPP also specifies a new packet core, the Evolved Packet Core (EPC) network architecture. This paper focuses only
on dimensioning the transport network of the E-UTRAN (i.e.
the LTE access network), which is based on IP. E-UTRAN is
designed to support high data rates, low latency, and hence to
bring improved user experience with full mobility. This is
achieved by introducing a new, fully IP-based flat architecture
with enhanced Node B (eNode B) directly connected to access
gateway (aGW). The eNode B (denoted as eNB in this paper) is in charge of Radio Resource Management (RRM) decisions,
scheduling of users, etc. The aGW provides termination of the
LTE bearer and acts as a mobility anchor point for the user
plane. The eNB is connected to the aGW with the S1 interface.
Between the eNBs the X2 interface is defined, which is used
to connect the eNBs with each other in the network. The X2

X. Li, U. Toseef, T. Weerawardane, and C. Goerg are with TZI-ikom, Institute of Communication Networks, University of Bremen, Germany (e-mail: xili | umr | tlw @ comnets.uni-bremen.de, cg@c-b-g.de).
W. Bigos and D. Dulas are with Nokia Siemens Networks Sp. z o.o., Wroclaw, Poland (e-mail: wojciech.bigos | dominik.dulas @nsn.com).
A. Timm-Giel is with the Institute of Communication Networks, Hamburg University of Technology, Germany (e-mail: timm-giel@tuhh.de).
A. Klug is with Nokia Siemens Networks GmbH & Co. KG, München, Germany (e-mail: andreas.klug@nsn.com).

interface is needed in the case of handover to forward the traffic from a source eNB to its target eNB.
This paper aims to propose efficient analytical models
to calculate the necessary bandwidths for the S1 interface. The
objective of the dimensioning is to minimize the transport
network costs (for leasing IP bandwidth) while being able to
fulfill the QoS requirements of various services. In this paper,
we consider two fundamental types of traffic: elastic traffic
and real time traffic. Elastic traffic is generated by non real
time (NRT) applications and is typically carried by the TCP
protocol. Typical applications are Internet services like web
browsing and FTP. Real time (RT) traffic is associated with
real time applications, which are delay-sensitive and have
strict packet delay requirements over the transport networks.
Typical applications in this traffic class are VoIP, streaming or
video conferencing. In this work, for the dimensioning of the
S1 interface the defined QoS requirement for the elastic traffic
is the end-to-end application throughput or transfer delay
(which specifies the amount of data that can be transferred in a
certain time period); while for real time traffic the considered
QoS is the transport network delay, i.e. the end-to-end packet
delay through the S1 interface (called S1 delay).
In this paper, we propose an individual analytical model for each traffic type for the dimensioning of the S1 interface, to meet the respective QoS requirements. The
proposed analytical dimensioning model for elastic traffic is
based on the M/G/R-Processor Sharing (M/G/R-PS) model,
which characterizes TCP traffic at flow level and is often used
to calculate the mean transaction time or throughput for TCP
flows. For real time traffic, we propose a simple queuing model at the packet level to estimate the transport network delay
performance. Furthermore, we present how to apply these two
proposed analytical models for carrying out the bandwidth
dimensioning for the S1 interface. In this work, an LTE
simulation model is developed to validate the analytical results
from the proposed dimensioning models. This LTE simulator
models in detail the important LTE network entities, protocol
layers, required scheduling and QoS functions, etc.
The remainder of the paper is organized as follows: Section II gives an overview of the developed LTE simulation model. Section III presents the analytical dimensioning model for elastic traffic, and Section IV presents the analytical model for real time traffic. In Section V, the complete dimensioning procedure is summarized. In Section VI, the proposed analytical models are validated by simulations.
II. LTE SIMULATION MODEL

The LTE simulator is implemented using the OPNET simulation software. The developed LTE simulation model is
shown in Fig. 1. It includes all basic E-UTRAN and EPC
network entities. The main focus of this simulation model is on the LTE access network.

Figure 1. LTE Simulation Model

Fig. 1 shows an example scenario with two eNBs, and a number of IP routers between the eNBs in the E-UTRAN
network and also connecting the eNBs with the aGW. The
EPC user-plane and control plane network entities are
represented by the aGW network entity. The aGW includes
the functionalities of the eGSN-C (evolved SGSN-C) and
eGSN-U (evolved SGSN-U). The remote node represents an
Internet server or any node that provides the Internet services.
Fig. 2 shows the LTE user-plane protocol structure which is
developed within this LTE simulator. The protocols are
categorized into three groups: radio (Uu), transport, and end-user protocols. The radio (Uu) protocols include the peer-to-peer protocols such as PDCP, RLC, MAC and PHY between
UE entity and eNB entity. The PDCP, RLC and MAC
(including air interface scheduler) layers are modeled in detail
according to the 3GPP specifications in this simulator. The PHY (physical) layer, however, is not modeled in detail, since our focus lies on the LTE access network. The effects of the radio channels and PHY characteristics are instead modeled at the MAC layer in terms of the data rates achieved by individual users. For the UE mobility, general mobility models such as random direction and random waypoint are used.

Figure 2. LTE Protocol Structure (user-plane)

The LTE transport network is based on IP technology. The user-plane transport protocols shown in Fig. 2 are used at
both the S1 and X2 interfaces. They mainly include the GTP, UDP, IP and L2 protocols. Ethernet is used as the layer 2 protocol in the current implementation. The IP protocol is one of the key protocols: it handles routing, security (IPsec), service differentiation and scheduling functionalities.
The LTE transport network applies the DiffServ-based QoS
framework and it is established by connecting a number of IP
DiffServ routers between the eNBs and the aGW. DiffServ is developed by the IETF [1] and defines the three most common Per Hop Behavior (PHB) groups corresponding to
different service levels: Expedited Forwarding (EF), Best
Effort (BE), and Assured Forwarding (AF). In the LTE
transport network, each PHB is assigned to a transport priority
and has its own buffer in the transport scheduler. To serve
different PHBs, Weighted Fair Queuing (WFQ) scheduling is
used. The definition of WFQ discipline is given in [2]. Let wk
be the weight of the kth PHB queue and BW the total available
IP bandwidth. If there are in total N PHB queues and all
queues are transmitting data, then the kth queue obtains a
fraction of the total capacity BWk as calculated in equation (1).
It shall be noticed that if one priority queue is empty (i.e. not
utilizing its allocated bandwidth) then its bandwidth shall be
fairly shared by the other queues according to their weights.
(1)   BW_k = ( w_k / Σ_{i=1..N} w_i ) · BW
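As an illustration, the WFQ allocation of equation (1) can be sketched in a few lines of Python (a minimal sketch with our own function names; the example weights and bandwidth are assumed values, and the allocation below covers only the case where all queues are backlogged — the redistribution of an empty queue's share is handled by the scheduler itself):

```python
def wfq_share(weights, k):
    """Fraction of the link that the k-th PHB queue receives under WFQ
    (equation (1)), assuming all queues are backlogged."""
    return weights[k] / sum(weights)

# Example: AF and BE queues with weights 10 and 1 on a 10 Mbps S1 link.
weights = [10, 1]
bw_total = 10e6
bw_af = wfq_share(weights, 0) * bw_total  # AF queue gets 10/11 of the link
```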

For modeling the end-user protocols, the standard OPNET protocols such as application and TCP/UDP are used. They
are located at the remote Internet server and each UE entity.
Furthermore, the control plane is not directly modeled within the LTE simulation model. However, the effects of signaling, such as its overhead and delays, are accounted for at the respective user-plane protocols where required.
III. DIMENSIONING MODEL FOR ELASTIC TRAFFIC

The elastic traffic is typically carried by the TCP protocol.


Due to TCP flow control, the rate of a TCP flow adjusts itself to adapt to the available bandwidth in the network. If TCP works
ideally (i.e. instantaneous feedback), all elastic traffic flows
going over the same link will share the bandwidth resources
equally and thus the system is essentially behaving as a
Processor Sharing (PS) system [3].
The M/G/R-PS model has become a popular approach for
dimensioning of different fixed (e.g. ADSL) and mobile
networks (e.g. UMTS). An introduction to the basic M/G/R-PS model can be found in [3]. In [4], an extension of the basic M/G/R-PS model was proposed which considers the impact of TCP slow start. In [5, 6, 7] the authors applied the M/G/R-PS
model to dimension the Iub transport links in the UMTS
network for elastic traffic. In this paper, the M/G/R-PS model
is further extended for dimensioning the LTE S1 interface.
In particular, we extend the M/G/R-PS model to cover both the LTE radio interface and the S1 interface with the Differentiated Services (DiffServ) QoS scheme; based on that, we propose a framework for estimating the end-to-end application performance of the elastic traffic, which consists of an air interface model and an S1 model.
A. Framework of the Dimensioning Model
By analyzing the LTE system model, it can be concluded
that the end-to-end application performance is essentially
influenced by both air interface and S1 interface. They are the
two major bottlenecks through the end-to-end path. The air
interface determines the radio resource each UE can get. The
higher the air interface utilization, the lower will be the

average UE throughput, as a result of the congestion over the air interface. The S1 interface is the second capacity
bottleneck. A congested S1 link can result in significant
increase of the end-to-end transfer delay. Thus, in order to
estimate the end-to-end performance, we need to model these
two bottlenecks individually in the analytical dimensioning
model. Hence, the proposed framework of the dimensioning
model consists of the air interface model and the S1 model.
Given the traffic models and the number of UEs in the cell,
the air interface model calculates the maximum average UE
throughput but without considering any congestion in the S1
transport network. Then the S1 model will take the maximum
UE throughput obtained from the air interface model as an
input parameter and then estimate the impact caused by the S1
link on the overall end-to-end performance. In the following,
we will introduce the detailed modelling of the air interface
and the S1 interface individually.
B. Modeling of the Air Interface
The air interface scheduler has an important impact on the achievable UE throughput. In this work, we consider the
case of scheduling all UEs in a cell in a round robin manner,
i.e. all UEs are equally served with the same priority. That
means, all elastic flows can share the common radio resources
equally and in this context the air interface can be modeled as
a Processor Sharing (PS) system. Furthermore, there is no
maximum bearer rate limitation for each LTE bearer (i.e. for
each individual flow). That means, one elastic flow can take
the complete radio resource for itself if there are no other UEs
active in the cell. In this case, the M/G/1-PS model can be
used to model the air interface, because the M/G/1-PS model
is defined for the situations where the flow rate is not limited,
i.e. each flow has the ability to fully utilize the whole capacity
when no other flow is present in the system [5].
Let CUu denote the cell capacity (in bps) and LoadUu be the
average traffic load in a cell (in bps). Here LoadUu can be
calculated from the given traffic models and total number of
active UEs in the cell. It is the sum of traffic of all services. As
a result, the average utilization of the air interface is calculated
as ρ_Uu = Load_Uu / C_Uu. Based on the sojourn time formula of the M/G/1-PS model, we can derive the delay factor f_Uu with equation (2). The delay factor is larger than or equal to 1. It
quantifies the increase of the transfer time (or decrease of the
effective throughput) of individual flows as a result of the air
interface congestion. It is noted that when the air interface utilization ρ_Uu is higher, the delay factor f_Uu also becomes higher, which implies that the application delay (or file transfer time) increases. Note that the delay factor f_Uu for elastic traffic also accounts for the traffic load of real time services. Real time traffic contributes to the total traffic load and shares the available radio resources with the elastic traffic; it therefore adds to the overall congestion and influences the application performance of the elastic traffic. We thus also need to take the real time traffic into consideration when estimating the end-to-end performance of the elastic traffic in our model.
(2)   f_Uu = 1 / (1 − ρ_Uu)

With f_Uu we can derive the average UE throughput r as a result of the air interface utilization in equation (3). It is noted that
r is only limited by the air interface capacity, assuming that
there is no congestion through the transport network (given
sufficient capacity). If the air interface capacity is fixed, r
represents the maximum average UE throughput given an
ideal transport network. Thus, in the next step we take r as the
peak UE data rate for dimensioning the S1.
(3)   r = C_Uu / f_Uu
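Equations (2) and (3) can be combined into a small helper (a sketch; the function name and the example numbers are our own assumptions, not values from the paper):

```python
def air_interface_model(c_uu, load_uu):
    """Delay factor f_Uu (equation (2)) and the maximum average UE
    throughput r (equation (3)) from the M/G/1-PS view of the cell."""
    rho_uu = load_uu / c_uu              # air interface utilization
    if rho_uu >= 1.0:
        raise ValueError("cell overloaded: utilization must be < 1")
    f_uu = 1.0 / (1.0 - rho_uu)          # delay factor, always >= 1
    r = c_uu / f_uu                      # peak UE rate used for S1 dimensioning
    return f_uu, r

# Example: a 10 Mbps cell carrying 6 Mbps of offered traffic.
f_uu, r = air_interface_model(10e6, 6e6)  # f_uu = 2.5, r = 4 Mbps
```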
C. Modeling of the S1 Interface
As shown in section II, the LTE S1 transport network is
based on IP using DiffServ QoS framework together with the
WFQ scheduling. The main idea of the proposed analytical
models for dimensioning the S1 link is to apply the M/G/R-PS
model per PHB class (i.e. per transport priority), while taking
the potential multiplexing gain of bandwidth sharing among
different PHB classes into account. The basic model for
dimensioning an IP-based transport link in the UMTS
networks, which deploy the IP DiffServ QoS structure, is
presented in [7]. This paper will further extend this basic
model to model the LTE S1 interface and also to capture the
detailed TCP slow start behavior.
For the analytical model, let C_S1 be the S1 bandwidth. For each PHB class, let L_S1(k) be the mean offered traffic of the PHB class k (including both RT and NRT traffic) and let w_k denote its WFQ weight. The following gives the detailed
steps to calculate the average application delay of elastic
traffic flows transmitted over the PHB class k.
Step 1: given C_S1, estimate the available bandwidth that can be used for the PHB class k, denoted as C_S1(k), using equation (4). C_S1(k) has a minimum bandwidth equal to the bandwidth allocated by the WFQ transport scheduler according to its weight w_k (see equation (1)), and may obtain additional bandwidth if the other PHBs do not fully utilize their allocated bandwidth share.

(4)   C_S1(k) = max( (w_k / Σ_i w_i) · C_S1 ,  C_S1 − Σ_{j≠k} L_S1(j) )

Step 2: With C_S1(k), the normalized traffic load of the PHB class k, denoted as ρ_k, can be derived as ρ_k = L_S1(k) / C_S1(k) for the given mean offered traffic over the PHB class k.
Step 3: For the PHB class k, apply the M/G/R-PS model to estimate the application performance. Here R is determined by R_k = C_S1(k) / r, where r, the maximum average UE data rate, is the result of the air interface model calculated from equation (3). It is noted that the same r is used for each PHB class: since there is no QoS prioritization at the air interface, all PHB classes have the same maximum average UE data rate.
Step 4: For the PHB class k, the expected sojourn time (or average transfer time) for transferring a file of length x_k can be derived from the basic M/G/R-PS model [3], as given in equation (5).
(5)   E_{M/G/R-PS}{T(x_k)} = (x_k / r) · [ 1 + E_2(R_k, A_k) / (R_k · (1 − ρ_k)) ] = (x_k / r) · f_k
Here E_2 denotes Erlang's second formula (the Erlang C formula), which is given in equation (6) with A_k = R_k · ρ_k. It is known that the Erlang C formula calculates the delay probability (i.e. the probability that a job has to wait) of the Erlang delay system. f_k is the delay factor of the TCP flows of the PHB class k over the S1 interface.
(6)   E_2(R_k, A_k) = [ (A_k^{R_k} / R_k!) · R_k / (R_k − A_k) ] / [ Σ_{i=0..R_k−1} (A_k^i / i!) + (A_k^{R_k} / R_k!) · R_k / (R_k − A_k) ]
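For illustration, steps 1 to 4 can be sketched as follows (a sketch with our own naming; `erlang_c` implements equation (6) and `mgr_ps_delay` equation (5), and the rounding of R_k to an integer is our assumption):

```python
import math

def erlang_c(R, A):
    """Erlang C formula E_2(R, A) (equation (6)): probability that a job
    has to wait in an Erlang delay system with R servers and load A."""
    top = (A ** R / math.factorial(R)) * R / (R - A)
    denom = sum(A ** i / math.factorial(i) for i in range(R)) + top
    return top / denom

def mgr_ps_delay(x_bits, r, c_k, rho_k):
    """Basic M/G/R-PS mean transfer time of an x_bits file (equation (5));
    r is the peak UE rate from equation (3), c_k the PHB-k bandwidth."""
    R = max(1, int(c_k / r))                       # number of 'servers' R_k
    A = R * rho_k                                  # offered load A_k in Erlang
    f_k = 1.0 + erlang_c(R, A) / (R * (1.0 - rho_k))
    return (x_bits / r) * f_k                      # = (x_k / r) * f_k
```

For R = 1, `erlang_c` reduces to the M/M/1 waiting probability ρ, which is a quick sanity check of the implementation.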

The basic M/G/R-PS model assumes ideal capacity sharing among active flows. However, the TCP flows are not always
able to utilize their fair share of the available bandwidth.
During the TCP slow start phase the available bandwidth cannot be completely utilized at the beginning of a transmission, and thus the resulting transfer delay is longer than the one theoretically computed from the basic M/G/R-PS model. For small file transactions and longer round trip times, the impact of TCP slow start is more significant. To take the impact of TCP slow start into account, the extended M/G/R-PS model proposed in [4, 5] can be applied to calculate the average transfer delay.

(7)   E_ext{T(x_k)} =
      ⌈log_2(x_k / MSS)⌉ · rtt_k + E_{M/G/R-PS}{T(x_k − x_start,k)}      if x_k < x_slow-start,k
      n* · rtt_k + E_{M/G/R-PS}{T(x_k − x_slow-start,k)}                  if x_k ≥ x_slow-start,k

In this extended M/G/R-PS model, the computation of the expected transfer time includes two parts: the first part
estimates the total time needed for the slow start phase; the
second part estimates the time of sending the rest of the data
with the available share capacity using the basic M/G/R-PS
model. Here n* represents the required number of round trip times (RTTs) before the available share of the capacity can be utilized, and x_slow-start denotes the amount of data sent within these n* RTTs. If the file size is smaller than x_slow-start, then the amount of data sent (entirely within the slow start phase) is denoted as x_start. It
can be seen that this calculation requires the information of
TCP segment size (MSS) and RTT. In our approach, a
minimum RTT rttmin is estimated by summing up all delays
through the end-to-end path: all node processing delays,
propagation delays (over the air interface, the S1 transport
links and the core network), and additional Internet delays.
When the air interface or the S1 links are congested, the resulting extra delays are added to the estimated RTT. For the PHB class k, the overall estimated RTT, denoted as rtt_k, is given by equation (8).
(8)   rtt_k = rtt_min · f_Uu · f_k
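The slow-start correction of equations (7) and (8) might be sketched as follows; `basic_delay` stands for the basic M/G/R-PS sojourn time E{T(x)} of equation (5), and all names and example numbers are our own assumptions:

```python
import math

def inflated_rtt(rtt_min, f_uu, f_k):
    """Equation (8): the minimum RTT stretched by the air interface and
    S1 congestion via the two delay factors."""
    return rtt_min * f_uu * f_k

def extended_transfer_time(x, mss, rtt, n_star, x_slowstart, x_start, basic_delay):
    """Extended M/G/R-PS transfer time with TCP slow start (equation (7))."""
    if x < x_slowstart:
        # the whole file fits into the slow start phase:
        # count the slow-start rounds needed for x bits
        rounds = math.ceil(math.log2(x / mss))
        return rounds * rtt + basic_delay(x - x_start)
    # slow start ends after n* RTTs; the rest is sent at the fair share rate
    return n_star * rtt + basic_delay(x - x_slowstart)
```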
D. Bandwidth Dimensioning for Elastic Traffic
For a certain NRT service s (e.g. http or ftp) with a defined
QoS target, the objective of the S1 dimensioning is to
determine necessary S1 bandwidths which satisfy the desired
QoS requirements of each PHB class of this NRT service. The
dimensioning procedure is given in the following.
Step 1: define an initial S1 link capacity C0;
Step 2: use the air interface model (refer to Part B) to calculate the delay factor of the Uu interface, f_Uu;

Step 3: for the given S1 capacity, estimate the average transfer delay for each PHB class of this service with the S1 model
(refer to Part C). If the obtained transfer delay of one PHB class cannot meet its QoS target, then the S1 capacity needs to
be increased. Thus for each PHB class k the required S1 link
bandwidth is derived numerically by performing delay
calculations for a range of bandwidths until the resulting
average transfer delay from a certain S1 bandwidth reaches the
defined application delay QoS target of that PHB class. This
step will be done for each PHB class of this service and the
derived bandwidth required for the PHB class k is CS1(k)s.
Step 4: we take the maximum bandwidth of all PHB classes to
be the required S1 capacity for the service s: CS1(s) = max.
{CS1(k)s}, as it satisfies QoS requirements of each PHB class.
Step 5: If there are several NRT services, we repeat steps 1-4 to derive the bandwidth for each NRT service and
then take the maximum one to be the required S1 capacity
which meets the QoS targets of all NRT services: S1_BW =
max.{C_S1(s)}. It should be noticed that this calculated S1 bandwidth may also include the traffic load of all RT services, if there are any. In that case, to derive the bandwidth for the NRT services only, we subtract the total RT traffic load (including all protocol overhead) from S1_BW as in equation (9).
(9)   S1_BW_NRT = S1_BW − Σ_j load(RT)_j
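The iterative search of steps 1 to 3 can be written as a small loop (a sketch under our own naming; `delay_for_bw` stands for the per-PHB transfer delay estimate of Part C, and the step size and toy delay function are assumptions for illustration):

```python
def dimension_link(delay_for_bw, delay_target, c0, step=1e6, c_max=1e9):
    """Numerically find the smallest S1 bandwidth (on a grid of width
    `step`) whose estimated average transfer delay meets the QoS target,
    starting from the initial capacity c0 (step 1 of the procedure)."""
    c = c0
    while delay_for_bw(c) > delay_target and c < c_max:
        c += step
    return c

# Toy example: delay inversely proportional to capacity, target 2 s.
required = dimension_link(lambda c: 100e6 / c, 2.0, 10e6, 10e6)
# required = 50 Mbps, since 100e6 / 50e6 = 2.0 s
```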

IV. DIMENSIONING MODEL FOR REAL TIME TRAFFIC

Unlike elastic traffic, the required QoS of real time (RT) traffic is the mean packet delay through the S1 interface
(i.e. S1 delay), which is on the packet level instead of on the
flow level. To calculate the S1 delay for each real time
application, we propose the M/D/1 model in this paper.
The RT applications or services, such as VoIP and video,
typically send packets at a fixed rate (according to the given
codec rate) with a fixed frame size and frame rate. Moreover, when loss occurs they do not retransmit packets, nor do they have any flow control mechanisms to adjust the data rate under
congestion situations. Thus, when a number of RT traffic
flows are transported over the S1 interface, the aggregated real
time traffic can be modeled as a superposition of fixed-rate
packet streams. When the number of RT users is large enough,
it can be assumed that they create a large number of
independent packets at the S1 link. Thus we can model the packet arrival process as a Poisson process. Furthermore, for a certain RT service the packet size is fixed, so for a given S1 bandwidth the service rate of this RT service is also constant. Therefore, we can apply the M/D/1 model to estimate the S1 delay of this RT service, assuming a Poisson arrival process and a
deterministic service rate. If the network only transmits RT
traffic, we propose to apply the M/D/1 model per PHB class
(transport priority), in order to consider the applied IP
DiffServ QoS scheme with WFQ scheduler at the S1 interface.
A. Estimation of the S1 delay for Real Time Traffic
The detailed analytical modeling is explained as follows. For
a certain RT service (e.g. VoIP), we define LRT(k) as the mean
traffic load of this RT service over the PHB class k at the S1

interface. Here L_RT(k) can be derived by calculating the corresponding application load (as explained in section III)
including additional protocol overheads. Let CRT be the total
S1 bandwidth needed for this RT service. The following gives
the full steps to calculate the average S1 delay of this RT
service transmitted over the PHB class k, where the LTE
transport network applies the WFQ scheduler to serve
different transport priorities.
Step 1: With CRT we can estimate the available bandwidth that
can be used for this RT service over the PHB class k, denoted
as C_RT(k), with equation (10). C_RT(k) has a minimum bandwidth equal to the bandwidth allocated by the WFQ transport scheduler according to its weight w_k (see equation (1)), and may obtain additional bandwidth if other PHBs do not fully utilize their allocated bandwidth share. It is noted that if all RT traffic is mapped to an EF PHB with a strict transport priority over the elastic traffic, then C_RT(k) = C_RT, as in this case the EF class can use the entire bandwidth.

(10)   C_RT(k) = max( (w_k / Σ_i w_i) · C_RT ,  C_RT − Σ_{j≠k} L_RT(j) )

Step 2: With C_RT(k) and the given mean offered traffic L_RT(k), the normalized traffic load of the PHB class k, denoted as ρ_RT(k), can be derived as ρ_RT(k) = L_RT(k) / C_RT(k).
Step 3: For the PHB class k, we apply the M/D/1 model to
estimate the S1 delay performance. Firstly, we estimate the
average queue length of the M/D/1 model:
(11)   L_M/D/1(k) = ρ_RT(k) + 0.5 · ρ_RT(k)² / (1 − ρ_RT(k))
Step 4: Then, with Little's law, the mean S1 delay of this RT service on the PHB class k can be derived with equation (12).
(12)   d_M/D/1(k) = L_M/D/1(k) / λ_k
Here λ_k is the packet arrival rate of this RT service over the PHB class k, which can be derived from its offered S1 traffic load on the PHB class k, i.e. L_RT(k), and the packet length l of this RT service with equation (13).
(13)   λ_k = L_RT(k) / l
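Steps 1 to 4 of the M/D/1 estimation can be condensed into one helper (a sketch; the names and the example numbers are our own assumptions, not values from the paper):

```python
def md1_s1_delay(load_rt_k, c_rt_k, pkt_bits):
    """Mean S1 delay of an RT service on PHB class k (equations (11)-(13))."""
    rho = load_rt_k / c_rt_k                   # normalized load rho_RT(k)
    q = rho + 0.5 * rho ** 2 / (1.0 - rho)     # mean number in system (11)
    lam = load_rt_k / pkt_bits                 # packet arrival rate (13)
    return q / lam                             # Little's law (12)

# Example: 2 Mbps of RT load on a 4 Mbps share, 800-bit packets.
d = md1_s1_delay(2e6, 4e6, 800)  # 0.3 ms mean S1 delay
```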
B. Bandwidth Dimensioning for Real Time Traffic
For RT traffic the objective of the dimensioning is to find
the necessary S1 bandwidth for a mean S1 delay target. The
dimensioning procedure is given in the following steps.
Step 1: define an initial S1 capacity for RT service j;
Step 2: for RT service j, estimate its mean S1 delay for each
PHB class of this service with the above method (refer to
equations (10)-(13)). If the obtained S1 delay of one PHB class cannot meet the required S1 delay target, then the S1 capacity
needs to be increased. Thus for each PHB class k the required
S1 link bandwidth is derived numerically by performing delay
calculations for a range of bandwidths until the resulting
average S1 delay from a certain S1 bandwidth reaches the
defined S1 delay target. This step will be done for each PHB
class of this RT service. At the end, the bandwidth required for
the PHB class k is denoted as BWRT(k) j.
Step 3: we take the maximum bandwidth of all PHB classes to
be the required S1 capacity for the RT service j: BWRT(j)=

max.{BW_RT(k)_j}, as it will satisfy the QoS requirements of every PHB class.
Step 4: If there are several RT services, we repeat steps 1-3 to
derive the bandwidth for each RT service, and then sum up
their dimensioned bandwidths to be the required S1 capacity
for carrying total RT traffic in the network.
(14)   S1_BW_RT = Σ_j BW_RT(j)
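The RT dimensioning procedure (steps 1-4 with the summation of equation (14)) can be sketched as follows; `s1_delay` stands for the per-PHB M/D/1 delay estimate of Part A, and the names and the grid search step are our own assumptions:

```python
def dimension_rt_traffic(services, s1_delay, delay_target, step=0.5e6, c_max=1e9):
    """For each RT service j, grow the bandwidth until the mean S1 delay
    target is met for its PHB class, then sum the per-service bandwidths
    over all services (equation (14))."""
    total = 0.0
    for svc in services:
        c = step
        while s1_delay(svc, c) > delay_target and c < c_max:
            c += step
        total += c
    return total
```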

V. BANDWIDTH DIMENSIONING FOR S1

Usually the network transmits both elastic and real time traffic, where the objective of the S1 dimensioning needs to
fulfill both the end-to-end application delay or throughput of
elastic traffic (or NRT services) and a mean packet delay
through the S1 interface for real time traffic. In this case, the
S1 dimensioning shall combine the dimensioning procedure
for both traffic types. From the proposed dimensioning
procedure for elastic traffic (explained in section III part D),
we can derive the required S1 bandwidth for supporting all
NRT services, i.e. S1_BW_NRT. For real time traffic, we
apply the dimensioning steps described in section IV to derive
the required S1 bandwidth S1_BWRT for supporting all RT
services in the network. Finally, the total required bandwidth for the S1 interface is calculated as the sum of the bandwidths required for the individual traffic types in equation (15).
(15)   S1_BW = S1_BW_NRT + S1_BW_RT
It is noted that this bandwidth is only for the S1 user plane.
If there are additional IP bandwidths reserved for the control
and signaling traffic, then these extra bandwidths also need to
be added to compute the total S1 bandwidth. It shall be also
noticed that the proposed dimensioning approach can be used
for dimensioning of both uplink and downlink bandwidth.
VI. RESULTS ANALYSIS

This section validates the applicability of the proposed analytical models by comparing the analytical results with the
LTE system simulation results for different traffic scenarios.
For the following validations, we investigate a single eNB
scenario without mobility (i.e. no handover) on the downlink
direction. The eNB consists of 3 cells, each cell with a
capacity of 10Mbps.
Firstly, we validate the proposed dimensioning model for
elastic traffic. In the following, we investigate the scenario
with FTP traffic. The FTP traffic model is defined with a
constant file size of 2Mbyte or 5Mbyte, and with
exponentially distributed inter-arrival time between files. Each
cell has 10 FTP users and in total there are 30 users in the
eNB. In the first example, all users have the same QoS
priority, i.e. there is no prioritization in the transport network.
The configured S1 link bandwidth is 10Mbps. Fig. 3 shows
the average FTP transfer delay in seconds over different S1
utilizations. Both analytical results derived from the proposed
model based on M/G/R-PS (see section III) and the simulation
results obtained from the LTE system simulations are
presented and compared against each other. The left diagram
gives the results for the case of 2Mbyte file and the right one
gives the results for the 5 Mbyte file. It is seen that for both cases the calculated average application delays match well with the simulated delays for different S1 utilizations.

Figure 3. Average FTP application delay over S1 utilization (no priority)

In the second example the LTE network defines two user groups: 50% premium users and 50% basic users. The premium UEs have higher priority and are mapped to the AF PHB in the S1 transport network, while the basic UEs are mapped to the BE PHB. For the applied WFQ transport scheduler, the weight of the BE PHB is 1 whereas the weight of the AF PHB is set to 10. Fig. 4 shows the average FTP transfer time to download a 5 Mbyte file for different S1 link utilizations per user priority. The left figure shows the mean transfer delays of the premium UEs and the right one shows the delays of the basic UEs. Fig. 4 also demonstrates that the proposed analytical model provides a suitable estimation of the average application delays for each user priority for the elastic traffic.

Figure 4. Average FTP application delay over S1 utilization (2 priorities)

Secondly, we validate the proposed dimensioning model for real time traffic. In the following examples, we investigate the scenario with only VoIP traffic. The applied voice traffic model uses the G.729A codec (8 kbps coding rate) and has a call duration of 90 s. The configured S1 bandwidth is 5 Mbps. Fig. 5 presents the mean S1 delay over the S1 link utilization for two cases: 10 VoIP UEs per cell (i.e. 30 UEs per eNB) and 20 VoIP UEs per cell (i.e. 60 UEs per eNB). In these two cases, all VoIP users are transmitted with the same priority. It shows that the M/D/1 model gives a proper evaluation of the average S1 delay (in ms) compared to the simulations in both cases. Furthermore, Fig. 6 presents the VoIP-only scenario with 10 UEs per cell where there are 50% premium users (mapped to the AF PHB) and 50% basic users (mapped to the BE PHB). We apply the approach described in section IV, using the M/D/1 model per priority class. The results in Fig. 6 verify the applicability of the M/D/1 model for dimensioning for RT traffic with multiple priorities.

Figure 5. Average S1 delay over S1 utilization (VoIP), no priority

Figure 6. Average S1 delay over S1 utilization (VoIP), 2 priorities

VII. CONCLUSION

In this paper, we present two different analytical models to dimension the S1 bandwidths for elastic traffic and real time traffic in the LTE access transport network. The analytical models are validated by comparison with simulation results for various traffic scenarios. The presented analytical results match well with the simulation results. This demonstrates that the proposed analytical models can appropriately estimate the application performances of different traffic types and priorities and thus can be used to dimension the LTE S1 interface.

REFERENCES

[1] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss. An Architecture for Differentiated Services. Request for Comments (Informational) 2475, Internet Engineering Task Force, December 1998.
[2] A. Demers, S. Keshav, and S. Shenker. Analysis and Simulation of a Fair Queueing Algorithm. J. Internetworking Res. Experience, pp. 3-26, Oct. 1990; also in Proc. ACM SIGCOMM'89, pp. 3-12.
[3] K. Lindberger. Balancing Quality of Service, Pricing and Utilisation in Multiservice Networks with Stream and Elastic Traffic. In Proc. of the International Teletraffic Congress (ITC 16), Edinburgh, Scotland, 1999.
[4] A. Riedl, T. Bauschert, M. Perske, and A. Probst. Investigation of the M/G/R Processor Sharing Model for Dimensioning of IP Access Networks with Elastic Traffic. First Polish-German Teletraffic Symposium (PGTS 2000), 2000.
[5] X. Li, R. Schelb, C. Görg, and A. Timm-Giel. Dimensioning of UTRAN Iub Links for Elastic Internet Traffic. In Proc. of the 19th International Teletraffic Congress, Beijing, Sep. 2005.
[6] X. Li, R. Schelb, C. Görg, and A. Timm-Giel. Dimensioning of UTRAN Iub Links for Elastic Internet Traffic with Multiple Radio Bearers. In Proc. of the 13th GI/ITG Conference on Measuring, Modelling and Evaluation of Computer and Communication Systems, Nürnberg, 2006.
[7] X. Li, W. Bigos, C. Goerg, A. Timm-Giel, and A. Klug. Dimensioning of the IP-based UMTS Radio Access Network with DiffServ QoS Support. In Proc. of the 19th ITC Specialist Seminar on Network Usage and Traffic (ITC SS 19), Technische Universität Berlin, October 2008.