Deliverable D4.2
Experiments Definition and Set-up
Version 1.0
Page 1 of 66
Copyright 2015, the Members of the SmartenIT Consortium
Document Control
Title: Experiments Definition and Set-up
Type: Internal
Editor(s): Roman Łapacz
E-mail: romradz@man.poznan.pl
Author(s):
Doc ID: D4.2-v1.0
AMENDMENT HISTORY
Version     | Date         | Author             | Description/Comments
V0.5        |              | Roman Łapacz       |
V0.6        | June 6, 2014 | David Hausheer     |
V0.9        |              | Roman Łapacz       |
V0.9        |              | Michael Seufert    | RB-HORST experiments
V0.9.1      |              | George Petropoulos | RB-HORST experiments
V0.9.1-0.1  |              |                    |
V0.9.1-0.40 |              |                    |
V0.9.1-0.41 |              | Roman Łapacz       |
V0.9.1-0.49 |              |                    |
V1.0        |              |                    |
Legal Notices
The information in this document is subject to change without notice.
The Members of the SmartenIT Consortium make no warranty of any kind with regard to this document,
including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The
Members of the SmartenIT Consortium shall not be held liable for errors contained herein or direct, indirect,
special, incidental or consequential damages in connection with the furnishing, performance, or use of this
material.
Table of Contents
1 Executive Summary ... 5
2 Introduction ... 6
2.1
2.2
3 Experiments ... 7
3.1 OFS Experiments
3.1.1 Evaluation of multi-domain traffic cost reduction in DTM: S-to-S case ... 17
3.1.2 Evaluation of multi-domain traffic cost reduction in DTM: M-to-M case ... 23
3.2 EFS Experiments
3.2.1 Evaluation of caching functionality in RB-HORST ... 33
3.2.2 Large-scale RB-HORST++ Study
3.2.3 Evaluation of data offloading functionality in RB-HORST
4 Showcases
4.1
4.2
4.3
5 Summary ... 60
6 SMART Objectives ... 61
7 References ... 64
8 Abbreviations ... 65
9 Acknowledgements ... 66
1 Executive Summary
This deliverable D4.2 "Experiments Definition and Set-up" presents a detailed description of experiments representing the SmartenIT scenarios, both the Operator Focused Scenario (OFS) and the End-user Focused Scenario (EFS), as defined in WP1 and matching the use-cases proposed in WP2. WP3 selected and implemented two network traffic management mechanisms for SmartenIT, namely DTM and RB-HORST; hence, the experiments described in this document are defined to evaluate these two solutions over the SmartenIT test-beds.

Each experiment definition follows a common format. This format has been formalized to hand over complete instructions on how to run the SmartenIT experiments and which metrics and parameters need to be collected from the evaluation of the prototype in order to properly assess the SmartenIT solutions.

The authors decided to focus on a small set of experiments covering the challenges addressed by the SmartenIT project. The experiments must clearly and accurately evaluate the project solutions and the quality of the pilot implementation.

This deliverable also reports on preliminary showcases. The project team demonstrated the running pilot implementation and its major functionalities during the second-year technical review with the EC. The showcases can be considered preliminary experiments aimed at showing the basic behaviour of the SmartenIT network traffic management mechanisms in a test-bed environment. The experience collected in the preparation of the showcases was an important input to the work on the final, advanced experiments documented in this deliverable.
2 Introduction
The goal of this document is to provide definitions of the SmartenIT experiments for the evaluation of prototypes. The presented details instruct how to evaluate the network traffic management mechanisms of the SmartenIT scenarios, both the Operator Focused Scenario (OFS) and the End-user Focused Scenario (EFS), as proposed in WP2 of the SmartenIT project. Apart from test procedures, experimenters are equipped with sets of parameters, metrics, test-bed configurations and other information needed to properly execute the experiments. The key requirement of each experiment is the use of the prototype implementation created in WP3. This makes it possible to evaluate the algorithms of the network traffic management mechanisms as well as the quality of the software implementation.

At the end of year 2, the project team prepared showcases presenting the behaviour of the two network traffic management mechanisms: DTM and RB-HORST. These are also reported in this document, since the experience gained during the preparation of the showcases was an important input to the further work on the experiment definitions.
3 Experiments
In this section, the experiment definitions of two types, namely OFS and EFS, are described in detail. The experiments are defined in such a way as to validate the pilot implementation developed in the SmartenIT project.

The second main experiment classification criterion is based on the tariff used for billing the traffic on inter-domain links. For each of the above two groups of experiments we plan to execute the performance evaluation separately for a volume-based tariff and a 95th percentile tariff.

Another dimension of experiment classification is the distinction between functionality tests and performance evaluation tests. The former are short experiments that check the mechanism itself and whether the whole test-bed environment operates correctly. The latter are used to evaluate the performance of DTM in a particular test-bed configuration and under a few configuration settings, in order to prove the benefits of using the mechanism. Both qualitative and quantitative metrics and KPIs will be carefully evaluated in this case.

The current status of the specification and implementation of DTM++ does not yet allow experiments to be defined. If it becomes possible, an experiment with the S-to-S topology and the 95th percentile tariff is planned and will be presented in D4.3.
Common tools and settings
In OFS test-bed experiments we consider one or two domains (autonomous systems) that
perform traffic cost optimization and traffic management using DTM. Since DTM
operations are possible only if an AS is multi-homed, it was assumed that each domain performing DTM has two inter-domain links. Inbound traffic management is performed, i.e., the DCs receiving the traffic from remote DCs are hosted by ASes that use DTM. For simplicity we assume that the domains hosting the DCs that are sources of the traffic are single-homed. This assumption on test-bed construction does not impact the goals and scope of the experiments.

There are two types of traffic: background traffic (non-manageable) and inter-DC manageable traffic. The former is sent over the inter-domain links and dominates, but DTM does not influence it; background traffic is sent over the default BGP paths. The manageable traffic is sent between remote Data Centers and consists of multiple flows of various sizes [2][3].
Physical test-bed configurations
There are three physical test-bed environments deployed: at the TUD, PSNC and IRT premises. All test-bed instances are compatible with the basic test-bed described in D4.1 and use the necessary extensions detailed in the following [4].

The basic test-bed instance installed at TUD uses three physical servers. Each machine is equipped with one Intel Xeon E5-1410 CPU @ 2.8 GHz (4 cores, 8 threads) and 16 GB of RAM. Two of them are provided with a 2 TB Toshiba SATA3 enterprise HDD, the last one with a 1 TB Seagate enterprise hard drive. The servers are interconnected with four physical 1 Gbps NICs as described in D4.1.
The mapping of the logical topology to physical machines in the TUD test-bed is presented in Figure 1, using the example of the most complex logical topology, i.e., the one for the M-to-M group of experiments. The mappings for S-to-S, S-to-M and M-to-S can be obtained by simply removing the unused devices from the logical topology and the respective virtual machines from the physical test-bed.
The test-bed installed at the PSNC premises uses only 2 powerful servers instead of the 3 physical machines proposed in the reference basic test-bed design. Server 1 is equipped with 2 CPUs @ 2.4 GHz with 6 cores each as well as 48 GB of RAM. Server 2 comprises the same number of CPUs and cores with a total of 64 GB of RAM. The servers are interconnected with two physical 1 Gb/s Ethernet links.

There are in total 28 virtual machines deployed in the test-bed (14 VMs on each server), which allows the DTM M-to-M experiments to be conducted. The VMs hosting the SmartenIT prototype software as well as the traffic generators and receivers (for both inter-DC and background traffic emulation) run the Ubuntu 14.04 64-bit operating system.
The IRT test-bed follows the same strategy as the PSNC test-bed, with a two-physical-server deployment. The first server is equipped with a four-core CPU (X3210 @ 2.13 GHz), 8 GB of RAM and 1 TB of HDD disk space, while the second server is equipped with 32 CPUs (E5-2450 @ 2.10 GHz), 64 GB of RAM and 1 TB of HDD disk space. All VMs related to the SmartenIT project residing on the two servers have been created starting from Ubuntu 14.04 (x64). The servers are connected over two dedicated 1 Gbps links, while management is provided over a separate link.

Apart from being compatible with the basic test-bed design, the test-bed implements the set of required extensions described in D4.1.
Figure 1: Mapping of the M-to-M logical topology to the physical machines (PC1-PC3) in the TUD test-bed. The topology comprises Clouds A-D with data centers DC-A to DC-D, domains AS1-AS5, border routers BG-1.1, BG-1.2, BG-4.1 and BG-4.2, the DA-1 and DA-4 routers, S-Boxes, SDN controllers, DC traffic generators, and traffic generators/receivers GA1, GA2, GC1, GC2, RA1, RA2, RC1 and RC2.
Figure: Mapping of the M-to-M logical topology to two physical machines (PC1 and PC2); the logical components (Clouds A-D, ASes 1-5, DA and BG routers, S-Boxes, SDN controllers, and traffic generators/receivers) are the same as in Figure 1.
Each time a flow is started, a template is selected and applied. Each template is configured with, among others, a definition of the flow length distribution (i.e., the time from the flow start to the sending of the last packet).
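The template-selection step described above can be sketched as follows. This is a minimal illustration only: the template names, weights, rates and the exponential length distribution are assumptions for the sketch, not the generator's actual configuration format.

```python
import random

# Hypothetical flow templates; the fields below are illustrative assumptions.
TEMPLATES = [
    {"name": "small", "weight": 0.7, "mean_len_s": 5.0, "rate_kbps": 200},
    {"name": "large", "weight": 0.3, "mean_len_s": 60.0, "rate_kbps": 2000},
]

def start_flow(rng):
    """Pick a template by weight and draw a flow length from an exponential
    distribution, as one plausible 'flow length distribution'."""
    template = rng.choices(TEMPLATES, weights=[t["weight"] for t in TEMPLATES])[0]
    length_s = rng.expovariate(1.0 / template["mean_len_s"])
    volume_kbit = length_s * template["rate_kbps"]
    return template["name"], length_s, volume_kbit

rng = random.Random(1)
flows = [start_flow(rng) for _ in range(1000)]
share_small = sum(1 for f in flows if f[0] == "small") / len(flows)
```

With these weights, roughly 70% of the started flows follow the "small" template.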
The advantages of the used generator over other available tools are stability, robustness, efficiency, and portability. It uses the stable Well19937c pseudo-random number generator. It has been tested for 24/7 operation stability with and without a receiver, and it was verified that each instance is able to generate dozens of Mbps of traffic. A sample configuration file is presented below:
Figure: Measurement points in a DTM domain: the total traffic (background + manageable) on inter-domain links L1 and L2 is measured at the border routers BG1 and BG2, the manageable traffic is measured per tunnel (e.g., tun 2) at the DA router, and the S-Box of the AS collects the measurements.
Page 12 of 66
Version 1.0
Copyright 2015, the Members of the SmartenIT Consortium
Two types of performance metrics are considered:
- point metrics, which represent the actual benefits of DTM and are calculated for an ended billing period;
- live metrics, which are observed during the billing period and allow the performance of DTM to be estimated in real time.
Two important metrics are the expected absolute value of the inter-domain traffic cost on each link and the summary cost. Denoting by $c_l(\cdot)$ the tariff function of link $l$ and by $x_l$ the total traffic volume received on link $l$ in a billing period, those values are calculated as follows:

$C_l = c_l(x_l)$

The total cost the ISP pays for inter-domain traffic in a billing period is $C = \sum_l C_l$.

The total amount of manageable traffic that was received via inter-domain link $l$ is denoted by $z_l$. If DTM were not used, all the manageable traffic from the DC serving as a source of the traffic to the DC receiving the traffic would pass over a default BGP path. This observation leads to the definition of the first KPI:

$KPI^{(1)} = \dfrac{c_1(x_1) + c_2(x_2)}{c_1(x_1 + z_2) + c_2(x_2 - z_2)}$   (1)

$KPI^{(2)} = \dfrac{c_1(x_1) + c_2(x_2)}{c_1(x_1 - z_1) + c_2(x_2 + z_1)}$   (2)
KPI (1) denotes the relative monetary gain of using DTM. It is the ratio of the total cost with traffic management to the total cost without traffic management, i.e., the case when a default BGP path is used (all manageable traffic passes inter-domain link 1). In turn, (2) denotes the monetary gain of balancing the traffic with DTM instead of using link 2 as the default BGP path. If both values are lower than 1, the ISP benefits from using DTM regardless of which link would be used as the default BGP path. If, for instance, $KPI^{(1)}$ is greater than or equal to 1, it is better for the ISP to transfer all manageable traffic via link 1, i.e., to use this link as the default BGP path.
In turn, the absolute cost benefit (or loss) from using DTM is expressed as

$G^{(1)} = c_1(x_1 + z_2) + c_2(x_2 - z_2) - c_1(x_1) - c_2(x_2)$

or

$G^{(2)} = c_1(x_1 - z_1) + c_2(x_2 + z_1) - c_1(x_1) - c_2(x_2)$

if link 1 or link 2, respectively, is considered the default path.
Another KPI represents the relation of the achieved cost to the cost expected if the achieved distribution of traffic among the links were exactly equal to the reference vector $R = (r_1, r_2)$. It can be defined as

$KPI_R = \dfrac{c_1(x_1) + c_2(x_2)}{c_1(r_1) + c_2(r_2)}$
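The volume-based KPIs can be sketched in a few lines of Python. This is a minimal sketch under assumptions: the tariff functions and traffic volumes below are illustrative (linear per-GB tariffs), not measurements from the test-bed.

```python
def kpis_volume(c1, c2, x1, x2, z1, z2):
    """KPIs for the volume-based tariff.

    c1, c2: tariff functions mapping traffic volume on links 1 and 2 to cost;
    x1, x2: total achieved volumes per link; z1, z2: manageable volumes per link.
    Without DTM, all manageable traffic would follow the default BGP path.
    """
    cost = c1(x1) + c2(x2)                      # achieved cost with DTM
    cost_no_dtm_1 = c1(x1 + z2) + c2(x2 - z2)   # link 1 as default BGP path
    cost_no_dtm_2 = c1(x1 - z1) + c2(x2 + z1)   # link 2 as default BGP path
    kpi1 = cost / cost_no_dtm_1                 # relative gain vs. default link 1
    kpi2 = cost / cost_no_dtm_2                 # relative gain vs. default link 2
    g1 = cost_no_dtm_1 - cost                   # absolute benefit vs. default link 1
    g2 = cost_no_dtm_2 - cost                   # absolute benefit vs. default link 2
    return kpi1, kpi2, g1, g2

# Assumed linear tariffs: link 1 costs 2 units/GB, link 2 costs 1 unit/GB.
c1 = lambda v: 2.0 * v
c2 = lambda v: 1.0 * v
kpi1, kpi2, g1, g2 = kpis_volume(c1, c2, x1=100.0, x2=100.0, z1=30.0, z2=40.0)
```

In this assumed setting KPI(1) is below 1 (DTM beats routing all manageable traffic over the expensive link 1), while KPI(2) is above 1, illustrating how the two KPIs are read side by side.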
Live performance metrics are built on periodic traffic measurements during the billing period. Let us assume that the billing period of length $T$ is divided into $n$ measurement periods of length $\Delta$, where $T = n\Delta$. Let us denote by $t_i$ the time that elapsed from the beginning of the current billing period to the moment of collection of the $i$-th measurement, where $i \in [1, n]$; in other words, $t_i = i\Delta$. The traffic volume on link $l$ expected by the end of the billing period, estimated at time $t_i$ from the volume $x_{l,i}$ observed so far, is

$\hat{x}_{l,i} = x_{l,i} \cdot \frac{T}{t_i}$

Then the cost of the traffic on link $l$ expected by the end of the billing period, estimated at time $t_i$, is calculated as

$\hat{C}_{l,i} = c_l(\hat{x}_{l,i})$
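The live extrapolation above is a one-liner; the sketch below uses assumed numbers (a week-long billing period observed half-way through, and a linear tariff) purely for illustration.

```python
def expected_cost(cost_fn, volume_so_far, t_i, T):
    """Estimate the end-of-period cost at time t_i by linearly extrapolating
    the volume observed so far (volume-based tariff)."""
    projected = volume_so_far * (T / t_i)  # x-hat = x * T / t_i
    return cost_fn(projected)

# Assumed: 40 GB observed after 3.5 days of a 7-day billing period,
# with a linear tariff of 1.5 units/GB.
est = expected_cost(lambda v: 1.5 * v, volume_so_far=40.0, t_i=3.5, T=7.0)
```

The estimate converges to the actual cost as t_i approaches T, since the projected volume then equals the measured one.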
For the 95th percentile tariff, billing is based on the 5-minute traffic samples $x_{l,i}$, $i \in [1, n]$, collected on link $l$ during the billing period. Let $\Phi_l$ denote the set which contains the $k$ highest 5-minute samples collected during the billing period on link $l$, where $k = n - \lceil 0.95\,n \rceil + 1$:

$\Phi_l = \{x_{l,i} : x_{l,i} \text{ is among the } k \text{ highest samples}, \; i \in [1, n]\}$

Therefore, the value taken for the cost calculation is defined as

$x_l^{95} = \min \Phi_l$

Also, $i_l^{95}$ denotes the sequential number of the smallest sample in set $\Phi_l$, i.e., $x_{l,i_l^{95}} = x_l^{95}$. Finally, the cost of inter-domain traffic on link $l$ is calculated as

$C_l = c_l(x_l^{95})$

The total cost the ISP pays for inter-domain traffic in a billing period is $C = \sum_l C_l$.
Version 1.0
Page 15 of 66
Copyright 2015, the Members of the SmartenIT Consortium
Since all 5-minute samples are collected (sets $\Phi_l$) and for each sample we know its manageable share $z_{l,i}$, the traffic samples expected without DTM can be derived. If link 1 belongs to the default BGP path, they are defined as follows:

$S_1^{w(1)} = \{x_{1,1} + z_{2,1}, \ldots, x_{1,i} + z_{2,i}, \ldots, x_{1,n} + z_{2,n}\}$

$S_2^{w(1)} = \{x_{2,1} - z_{2,1}, \ldots, x_{2,i} - z_{2,i}, \ldots, x_{2,n} - z_{2,n}\}$

The next step is to find the corresponding sets of the $k$ highest samples on each link: $\Phi_1^{w(1)}$ and $\Phi_2^{w(1)}$. Then the smallest sample in each set is found. The sizes of those samples are used to calculate the cost that an ISP would have to pay without using DTM. Similarly to the approach presented for the total-volume-based tariff, the following KPIs can be defined:

relative monetary gain of using DTM instead of using link 1 as the default BGP path:

$KPI^{(1)} = \dfrac{c_1(x_1^{95}) + c_2(x_2^{95})}{c_1(\min \Phi_1^{w(1)}) + c_2(\min \Phi_2^{w(1)})}$

absolute cost benefit (or loss) from using DTM:

$G^{(1)} = c_1(\min \Phi_1^{w(1)}) + c_2(\min \Phi_2^{w(1)}) - c_1(x_1^{95}) - c_2(x_2^{95})$

If we assume that link 2 belongs to the default BGP path, then the expected traffic samples without DTM are defined as follows:

$S_1^{w(2)} = \{x_{1,1} - z_{1,1}, \ldots, x_{1,i} - z_{1,i}, \ldots, x_{1,n} - z_{1,n}\}$

$S_2^{w(2)} = \{x_{2,1} + z_{1,1}, \ldots, x_{2,i} + z_{1,i}, \ldots, x_{2,n} + z_{1,n}\}$

relative monetary gain of using DTM instead of using link 2 as the default BGP path:

$KPI^{(2)} = \dfrac{c_1(x_1^{95}) + c_2(x_2^{95})}{c_1(\min \Phi_1^{w(2)}) + c_2(\min \Phi_2^{w(2)})}$

absolute cost benefit (or loss) from using DTM:

$G^{(2)} = c_1(\min \Phi_1^{w(2)}) + c_2(\min \Phi_2^{w(2)}) - c_1(x_1^{95}) - c_2(x_2^{95})$

The KPI representing the relation of the cost achieved to the cost expected if the achieved distribution of traffic among links were exactly equal to the reference vector is also valid for the 95th percentile tariff.
As mentioned at the beginning of this section, to calculate the cost of inter-domain traffic it is necessary to know the size of the smallest sample in the set of $k = n - \lceil 0.95\,n \rceil + 1$ highest samples. Therefore, to estimate the expected traffic cost during a billing period we need to collect samples every 5 minutes and update the set of highest samples. Let us define a temporary set of samples as

$S_{l,i} = \{x_{l,1}, \ldots, x_{l,i}\}$

where $|S_{l,i}| = i$. $S_{l,i}$ is the set of samples collected on link $l$ from the beginning of the current billing period until time $t_i$. Thus, the set is updated every 5 minutes. Additionally,

$\Phi_{l,i} = \{x_{l,j} : x_{l,j} \in S_{l,i} \text{ is among the } k \text{ highest samples of } S_{l,i}\}$

$\hat{x}_{l,i}^{95} = \min \Phi_{l,i}$

Then the cost of the traffic on link $l$ expected by the end of the billing period, estimated at time $t_i$, is calculated as

$\hat{C}_{l,i} = c_l(\hat{x}_{l,i}^{95})$

Note that after all samples are collected (the end of the billing period) the estimated cost equals the actual one: $\hat{C}_{l,n} = C_l$, since $\Phi_{l,n} = \Phi_l$ and $\hat{x}_{l,n}^{95} = x_l^{95}$.

Finally, for a multi-homed domain having $m$ inter-domain links, the estimation at time $t_i$ of the total cost the ISP expects to pay is calculated as

$\hat{C}_i = \sum_{l=1}^{m} \hat{C}_{l,i}$
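The 95th percentile billing rule and its running estimate can be sketched compactly. The sample values and the linear tariff below are assumptions for illustration; the rule itself (take the smallest of the k highest samples) follows the definitions above.

```python
import math

def billed_sample(samples):
    """Return the 95th-percentile sample used for billing: the smallest
    element of the set of k = n - ceil(0.95 * n) + 1 highest samples."""
    n = len(samples)
    k = n - math.ceil(0.95 * n) + 1
    top_k = sorted(samples)[-k:]  # the k highest samples (set Phi)
    return min(top_k)

def estimated_cost(cost_fn, samples_so_far):
    """Running estimate during the billing period: apply the same rule to the
    samples collected so far; it equals the actual cost once all n samples
    are in."""
    return cost_fn(billed_sample(samples_so_far))

# 100 assumed 5-minute samples (in Mbps): k = 100 - 95 + 1 = 6, so the billed
# sample is the 95th-highest value, i.e. the classic "discard the top 5%" rule.
samples = list(range(1, 101))  # 1..100 Mbps
x95 = billed_sample(samples)
```

The same `billed_sample` function, applied to the shifted sample sets S^w(1) and S^w(2), yields the without-DTM costs needed for the percentile-tariff KPIs.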
3.1.1 Evaluation of multi-domain traffic cost reduction in DTM: S-to-S case
The goal of this experiment is to evaluate DTM functionality and performance. The use-case considered is "Bulk data transfer for cloud operators". The logical topology for this experiment is presented in Figure 5. There are two domains hosting DCs: AS1 and AS3. The data center located at AS3 (DC-B) serves as the source of manageable traffic, while DC-A receives the traffic. AS1 performs management of inbound inter-domain traffic to reduce the costs of inter-domain traffic. Using DTM, it influences the distribution of manageable traffic among the two inter-domain links (L1 and L2) in a cost-efficient way. This is achieved by selecting one of the tunnels (tun 1 or tun 2) for flows originated at DC-B.
Figure 5: Logical topology for the S-to-S experiment. DC-B in Cloud B (AS3), with its DC traffic generator, the DA-1 router, an S-Box and the SDN controller, sends manageable traffic to DC-A in Cloud A (AS1), whose border routers BG-1.1 and BG-1.2 connect via the transit domain AS2 hosting the background traffic generators (senders) and receivers. The legend distinguishes BGP routers, intra-domain routers, inter-domain links and intra-domain links.
IP Address or Address Range | Usage
10.0.1.0/24  | Interconnection ISP1-ISP2
10.0.2.0/24  | Interconnection ISP1-ISP2
10.1.1.0/30  | Interconnection ISP2-ISP3
10.10.1.0/24 | ISP1
10.10.2.0/24 | ISP1
10.10.3.0/24 | ISP1
10.1.2.0/24  | ISP2
10.1.5.0/24  | ISP2
10.1.6.0/24  | ISP2
10.1.3.0/24  | ISP3
10.1.4.0/24  | ISP3
Host | Services / Comment
dtm-isp1-rtr-bg1 |
dtm-isp1-rtr-bg2 |
dtm-isp1-rtr-da  |
dtm-isp1-vmdc    |
dtm-isp1-vmsbox  | S-Box
dtm-isp1-vmtr1   |
dtm-isp1-vmtr2   |
dtm-isp2-rtr-bg1 |
dtm-isp2-rtr-bg2 |
dtm-isp2-vmtg1   |
dtm-isp2-vmtg2   |
dtm-isp3-rtr-bg1 |
dtm-isp3-ofda    |
dtm-isp3-vmdc    |
dtm-isp3-vmsbox  | S-Box
dtm-isp3-sdn     | SDN Controller
Measured value | Measurement point | Notation | Frequency
Temporary values of total traffic on inter-domain links | AS1 border routers: BG-1.1 and BG-1.2 | x_{1,i} and x_{2,i} | Δ = 30 s
Temporary values of manageable traffic | DA-1 router at AS1 | z_{1,i} and z_{2,i} | Δ = 30 s
Achieved values of total traffic on inter-domain links | AS1 border routers: BG-1.1 and BG-1.2 | x_1 and x_2 | End of billing period
Achieved values of manageable traffic | DA-1 router at AS1 | z_1 and z_2 | End of billing period
Compensation vector | s-Box at AS1 | - | Δ = 30 s
Reference vector | s-Box at AS1 | - | End of billing period
Measured value | Measurement point | Notation | Frequency
5-minute samples of total traffic on inter-domain links | AS1 border routers: BG-1.1 and BG-1.2 | Elements of sets: x_{1,i} and x_{2,i} | Δ = 5 min
Share of manageable traffic in 5-minute samples | DA-1 router at AS1 | Elements of sets: z_{1,i} and z_{2,i} | Δ = 5 min
Sets of samples on each inter-domain link by the end of the billing period and the size of the sample used for billing | AS1 border routers: BG-1.1 and BG-1.2 | Sets Φ_1 and Φ_2 and samples x_1^95 and x_2^95 | End of billing period
Sets of samples of manageable traffic by the end of the billing period | DA-1 router at AS1 | z_{1,i} and z_{2,i} | End of billing period
Compensation vector | s-Box at AS1 | - | Δ = 30 s
Reference vector | s-Box at AS1 | - | End of billing period
The metrics and KPIs evaluated in this experiment are: the achieved costs C, C_1 and C_2; the volume-based tariff KPIs KPI^(1), G^(1) and KPI^(2), G^(2); the live estimates Ĉ_i, Ĉ_{1,i} and Ĉ_{2,i}; and the 95th percentile tariff KPIs KPI^(1), G^(1), KPI^(2) and G^(2).
Test procedures

The following three main stages of the experiment are defined.

Stage 1: Functionality test

The main purpose of the functionality test is a coarse observation of the procedures of the DTM mechanism and their evaluation in order to validate the implementation. For this test the billing period should be set to one hour. The traffic envelope of the generators (both background and DC-DC traffic) should be flat. After two trial billing periods (system warm-up), during the third hour bursts of background traffic will be manually injected into the network in order to evaluate their proper compensation. The functionality test will be performed for both the volume-based and the 95th percentile-based tariff test-bed setups.
Stage 2: Performance evaluation test for the volume-based tariff

The performance evaluation of DTM for the volume-based tariff will be performed using the KPIs. The billing period should be set to one week. Since DTM will be started without any initial setup (initial values of the Reference and Compensation vectors), the first week is needed to collect traffic statistics in order to calculate the Reference and Compensation vectors. After that, statistics from the next two billing periods will be collected for evaluation purposes. The performance evaluation test will be started with a daily-envelope traffic pattern for both the DC-DC and the background traffic generators.
Stage 3: Performance evaluation test for the 95th percentile-based tariff

The performance evaluation test for the 95th percentile-based tariff will be performed using the KPIs. Similarly to the volume-based tariff test, a one-week billing period will be used. The measurements will be taken during the two billing periods following the initial billing period. DC-DC and background traffic profiles with a daily envelope pattern will be used.
3.1.2 Evaluation of multi-domain traffic cost reduction in DTM: M-to-M case

Figure: Logical topology for the M-to-M group of experiments. DC-B (Cloud B, AS3) and DC-D (Cloud D, AS5), each with a DC traffic generator, a DA router (DA-1, DA-4), an S-Box and an SDN controller, send manageable traffic to DC-A (Cloud A, AS1) and DC-C (Cloud C, AS4), whose border routers BG-1.1/BG-1.2 and BG-4.1/BG-4.2 connect through the transit domain AS2 hosting the background traffic generators GA1, GA2, GC1 and GC2 and the receivers RA1, RA2, RC1 and RC2.
Deployment infrastructure
The deployment infrastructure for the M-to-M experiment is presented in Figure 8. The addressing scheme is presented in Table 6. A description of all virtual machines can be found in Table 7.
Table 6: Detailed IP address table for the production network for the DTM evaluation.

IP Address or Address Range | Usage
10.0.1.0/24   | Interconnection ISP1-ISP2
10.0.2.0/24   | Interconnection ISP1-ISP2
10.0.3.0/24   | Interconnection ISP4-ISP2
10.0.4.0/24   | Interconnection ISP4-ISP2
10.1.1.0/24   | Interconnection ISP2-ISP3
10.1.12.0/24  | Interconnection ISP2-ISP5
10.10.1.0/24  | ISP1
10.10.2.0/24  | ISP1
10.10.3.0/24  | ISP1
10.1.2.0/24   | ISP2
10.1.5.0/24   | ISP2
10.1.6.0/24   | ISP2
10.1.3.0/24   | ISP3
10.1.4.0/24   | ISP3
10.10.7.0/24  | ISP4
10.10.9.0/24  | ISP4
10.10.10.0/24 | ISP4
10.1.10.0/24  | ISP5
10.1.11.0/24  | ISP5
Table 7: Hosts and the services running on them for the DTM M-to-M experiment.

Host | Services / Comment
dtm-isp1-rtr-bg1 |
dtm-isp1-rtr-bg2 |
dtm-isp1-rtr-da  |
dtm-isp1-vmdc    |
dtm-isp1-vmsbox  | S-Box
dtm-isp1-vmtr1   |
dtm-isp1-vmtr2   |
dtm-isp2-rtr-bg1 |
dtm-isp2-rtr-bg2 |
dtm-isp2-vmtg1   |
dtm-isp2-vmtg2   |
dtm-isp3-rtr-bg1 |
dtm-isp3-ofda    |
dtm-isp3-vmdc    |
dtm-isp3-vmsbox  | S-Box
dtm-isp3-sdn     | SDN Controller
dtm-isp4-rtr-bg1 |
dtm-isp4-rtr-bg2 |
dtm-isp4-rtr-da  |
dtm-isp4-vmdc    |
dtm-isp4-vmsbox  | S-Box
dtm-isp4-vmtr1   |
dtm-isp4-vmtr2   |
dtm-isp5-rtr-bg1 |
dtm-isp5-ofda    |
dtm-isp5-vmdc    |
dtm-isp5-vmsbox  | S-Box
dtm-isp5-sdn     | SDN Controller
Table 8: Measurement points and measured values: experiment with volume-based tariff.

Domain | Measured value | Measurement point | Notation | Frequency
AS1 | Temporary values of total traffic on inter-domain links in AS1 | AS1 border routers: BG-1.1 and BG-1.2 | x_{1,i} and x_{2,i} | Δ = 30 s
AS1 | Temporary values of manageable traffic in AS1 | DA-1 router at AS1 | z_{1,i} and z_{2,i} | Δ = 30 s
AS1 | Achieved values of total traffic on inter-domain links in AS1 | AS1 border routers: BG-1.1 and BG-1.2 | x_1 and x_2 | End of billing period
AS1 | Achieved values of manageable traffic in AS1 | DA-1 router at AS1 | z_1 and z_2 | End of billing period
AS1 | Compensation vector in AS1 | s-Box at AS1 | - | Δ = 30 s
AS1 | Reference vector in AS1 | s-Box at AS1 | - | End of billing period
AS4 | Temporary values of total traffic on inter-domain links in AS4 | AS4 border routers: BG-4.1 and BG-4.2 | x_{1,i} and x_{2,i} | Δ = 30 s
AS4 | Temporary values of manageable traffic in AS4 | DA-4 router at AS4 | z_{1,i} and z_{2,i} | Δ = 30 s
AS4 | Achieved values of total traffic on inter-domain links in AS4 | AS4 border routers: BG-4.1 and BG-4.2 | x_1 and x_2 | End of billing period
AS4 | Achieved values of manageable traffic in AS4 | DA-4 router at AS4 | z_1 and z_2 | End of billing period
AS4 | Compensation vector in AS4 | s-Box at AS4 | - | Δ = 30 s
AS4 | Reference vector in AS4 | s-Box at AS4 | - | End of billing period
Table 9: Measurement points and measured values: experiment with 95th percentile based tariff.

Domain | Measured value | Measurement point | Notation | Frequency
AS1 | 5-minute samples of total traffic on inter-domain links | AS1 border routers: BG-1.1 and BG-1.2 | Elements of sets: x_{1,i} and x_{2,i} | Δ = 5 min
AS1 | Share of manageable traffic in 5-minute samples | DA-1 router at AS1 | Elements of sets: z_{1,i} and z_{2,i} | Δ = 5 min
AS1 | Sets of samples on each inter-domain link by the end of the billing period and the size of the sample used for billing | AS1 border routers: BG-1.1 and BG-1.2 | Sets Φ_1 and Φ_2 and samples x_1^95 and x_2^95 | End of billing period
AS1 | Sets of samples of manageable traffic by the end of the billing period | DA-1 router at AS1 | z_{1,i} and z_{2,i} | End of billing period
AS1 | Compensation vector | s-Box at AS1 | - | Δ = 30 s
AS1 | Reference vector | s-Box at AS1 | - | End of billing period
AS4 | 5-minute samples of total traffic on inter-domain links | AS4 border routers: BG-4.1 and BG-4.2 | Elements of sets: x_{1,i} and x_{2,i} | Δ = 5 min
AS4 | Share of manageable traffic in 5-minute samples | DA-4 router at AS4 | Elements of sets: z_{1,i} and z_{2,i} | Δ = 5 min
AS4 | Sets of samples on each inter-domain link by the end of the billing period and the size of the sample used for billing | AS4 border routers: BG-4.1 and BG-4.2 | Sets Φ_1 and Φ_2 and samples x_1^95 and x_2^95 | End of billing period
AS4 | Sets of samples of manageable traffic by the end of the billing period | DA-4 router at AS4 | z_{1,i} and z_{2,i} | End of billing period
AS4 | Compensation vector | s-Box at AS4 | - | Δ = 30 s
AS4 | Reference vector | s-Box at AS4 | - | End of billing period
Test procedures

As for the single-to-single case, three stages are proposed for the multi-to-multi case: stage 1, the functionality test; stage 2, the performance test with the volume-based tariff; and stage 3, the performance test with the 95th percentile-based tariff. The description of each of them is an extension of the text regarding the single-to-single test procedure.
Stage 1: Functionality test

The main goal of the functionality test is a basic evaluation of the DTM mechanism in the multi-to-multi experiment configuration. As for the single-to-single experiment, the billing period will be set to one hour. Since a more complex topology will be used, more traffic generators and receivers will be utilized. Each traffic generator (background and DC) will be configured to generate a flat-envelope traffic pattern. The observation of the DTM procedures will be done during two billing periods. In order to validate the DTM mechanism, bursts of background traffic affecting both receiving ISPs will be injected. The functionality test will be performed for both the volume-based and the 95th percentile-based tariff.
Stage 2: Performance evaluation test for the volume-based tariff

The performance evaluation test for the volume-based tariff will be performed in order to calculate the KPIs, calculated separately for the two receiving domains (AS1 and AS4). The test setup is the same as for the single-to-single scenario (billing period: one week; usable observation time: 2 billing periods; traffic envelope: daily profile).
Stage 3: Performance evaluation test for the 95th percentile-based tariff

The performance evaluation test for the 95th percentile-based tariff will be performed using the performance metrics and KPIs, calculated separately for the two receiving domains (AS1 and AS4). The test setup is the same as for the single-to-single scenario, i.e., billing period: one week; usable observation time: 2 billing periods; traffic envelope: daily profile.
3.2 EFS Experiments

The caching experiment (Section 3.2.1) aims at validating and evaluating the performance of the RB-HORST mechanism's caching and proxying functionality in a test-bed environment.

The large-scale study (Section 3.2.2) will test the RB-HORST platform in a real-world environment with real users, and will extract all the measurements required to evaluate the performance of the social and overlay prediction algorithms and to quantify the benefits of content prefetching for end-users and ISPs.

Finally, the mobile data offloading experiment (Section 3.2.3) will monitor the energy consumption of uNaDas and smartphones, and evaluate the bandwidth and energy consumption savings of WiFi offloading under realistic bandwidth conditions.
Figure 9 shows the basic topology of the caching and mobile data offloading experiments. It consists of 3 NSP domains, 2 access domains (AS1 and AS3) and 1 transit domain (AS2), and 3 users (Andri, Sergios and George), each one having Internet access through their respective ISP. The transit NSP provides access to the rest of the Internet, e.g. the Facebook and Vimeo servers.

In each user's premises there is a uNaDa, which is a Raspberry Pi hosting the RB-HORST software. Each uNaDa is assigned to its owner via his Facebook credentials and provides 2 SSIDs, one open but with no Internet access, and one private with full Internet access. In addition, each user owns an Android smartphone to access the Internet, with the RB-HORST Android application and at least a web browser installed. Of course, depending on the experiment, there could be multiple users, with their respective Android smartphones, accessing and connecting to the uNaDas.
Figure 10 shows the mapping of the EFS basic topology to the actual test-bed. As indicated in the figure, PC/ISP1, 2 and 3 map to ASes 2, 1 and 3, respectively, meaning that ISP1 is the transit domain, and ISPs 2 and 3 are the access domains.
Figure 10: Mapping of the EFS basic topology to the SmartenIT test-bed
In addition, the following tables provide the IP addresses and the services used to run the EFS experiments.

Table 11: Detailed IP address table for the production network for the EFS experiments.

IP Address or Address Range | Usage
10.201.0.0/18   | Interconnection
10.201.50.0/30  | Interconnection ISP1-ISP2
10.201.50.4/30  | Interconnection ISP2-ISP3
10.201.50.8/30  | Interconnection ISP1-ISP3
10.201.64.0/18  | ISP1
10.201.100.0/27 |
10.201.128.0/18 | ISP2
10.201.150.0/27 |
10.201.191.0/24 |
10.201.192.0/18 | ISP3
10.201.200.0/27 |
10.201.255.0/24 |
Table 12: Hosts and the services running on them for the EFS experiments.

Host | Services / Comment
isp1-rtr1 |
isp1-rtr2 |
isp2-rtr1 |
isp2-un1  |
isp3-rtr1 |
isp3-un1  |
isp3-un2  |

Comment: the uNaDa hosts provide the HORST and RBH_Secured SSIDs.
Deployment infrastructure
This specific experiment focuses on access domain AS1 (ISP3), aiming to evaluate the
caching capabilities of a single uNaDa. For this purpose, several Android smartphones
with the RB-HORST Android application and a web browser installed, and the uNaDa
hosting the RB-HORST software, are required. Figure 11 presents the test-bed segment
that is required to run the experiment.
Cache size: 0MB (no caching), 128MB, 256MB, 512MB, 1GB (reference will be
selected after cache size performance study)
Video request rate: 1/16min, 1/8min (reference), 1/4min, 1/2min, 1/1min, 1/0.5min
Request generator: same video, random video (100 videos, uniform distribution),
catalogue (reference, 100 videos, Zipf-distributed probability), avg. video length:
3min
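The catalogue-based request generator described above can be sketched as follows. This is a minimal sketch: the Zipf skew exponent, the seed, and the function names are assumptions, since the text only specifies a 100-video catalogue with Zipf-distributed popularity.

```python
import random

def zipf_weights(n, s=1.0):
    # Unnormalized Zipf weights: rank k gets probability proportional to 1/k^s.
    return [1.0 / (k ** s) for k in range(1, n + 1)]

def generate_requests(num_requests, catalogue_size=100, skew=1.0, seed=42):
    # Draw video IDs (rank 1 = most popular) from a Zipf-distributed catalogue.
    rng = random.Random(seed)
    weights = zipf_weights(catalogue_size, skew)
    return rng.choices(range(1, catalogue_size + 1), weights=weights, k=num_requests)

requests = generate_requests(1000)
```

Feeding such a request stream into the uNaDa allows the same experiment run to be repeated deterministically across cache sizes.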
Content request time (request sent on end user device, request arrived at uNaDa)
Content serve time (content sent from uNaDa, content arrived at end user device)
Up-/downlink traffic traces measured at uNaDa (end user device - uNaDa, uNaDa - Vimeo)
Cache hit rate of uNaDa = #(requests served from cache) / #(requests arrived)
Requests served by uNaDa = #(content serve time) [compare to video request rate]
Bandwidth utilization from traffic traces (download bandwidth at end user device
[uNaDa QoS], amount of traffic to Vimeo [inter-domain traffic saved])
Energy consumption
QoE (compute stalling events from download bandwidth and video bitrate)
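The cache hit rate and the stalling-based QoE metric above can be computed from the collected traces; a minimal sketch, assuming a simple fluid buffer model and an initial buffer threshold that the document does not specify:

```python
def cache_hit_rate(requests_arrived, requests_served_from_cache):
    # Cache hit rate of uNaDa = #(requests served from cache) / #(requests arrived).
    return requests_served_from_cache / requests_arrived if requests_arrived else 0.0

def count_stalling_events(bandwidth_samples_kbps, video_bitrate_kbps, initial_buffer_s=2.0):
    # Estimate stalling events from per-second download bandwidth samples and the
    # video bitrate: each second the buffer gains bandwidth/bitrate seconds of video
    # and, while playing, drains one second of video.
    buffer_s, playing, stalls = initial_buffer_s, True, 0
    for bw in bandwidth_samples_kbps:
        buffer_s += bw / video_bitrate_kbps
        if playing:
            buffer_s -= 1.0
            if buffer_s <= 0.0:
                buffer_s, playing, stalls = 0.0, False, stalls + 1
        elif buffer_s >= initial_buffer_s:  # rebuffered enough, playback resumes
            playing = True
    return stalls
```

With download bandwidth at or above the video bitrate no stalling occurs; at half the bitrate, periodic stalls appear.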
Test procedures
The following tests are defined to validate functionality and estimate performance.
Functionality tests
The functionality tests ensure that the caching functionality works as expected. Therefore,
home routers and end devices must be set up and the home router has to be registered in
the overlay. It must be tested that content requests are sent from the end device and
content can be consumed. The consumed content has to be cached on the home router
and a subsequent request to that content must be served from the cache of the home
router.
Figure 12 shows the topology of the caching functionality tests. A uNaDa is located in the
AS of an access provider. The AS is connected to the Internet via AS2 which might be a
transit provider. In the Internet, video content can be accessed from its providers.
The test is set up as follows: Google Nexus 5 smartphones have RB-HORST installed.
uNaDa A has an Internet connection and SSID HORST A_AP. Users A and C watch common
videos and are connected via WiFi to A_AP.
In the functionality test reference scenario, both users A and C download the same video
as usual, without RB-HORST functionality.
With the use of RB-HORST in the uNaDa, a video downloaded by user A is cached on the
uNaDa. User C downloads the same video. The request is intercepted by the uNaDa and
served from the uNaDa cache.
Performance study cache size: reference set-up but change cache size (will provide
a reference cache size for further performance tests)
Performance study number of end user devices: reference set-up but change
number of end devices
Performance study inter-arrival time of requests: reference set-up but change video
request rate
Performance study request strategies: reference set-up but change request strategy
The study will be conducted by at least 40 participants at TUD, UZH, UniWue, and AUEB.
Additional nodes may be emulated in EmanicsLab (LXC containers) if needed. The goal of
the study is to have participants in at least 4 different locations throughout Europe.
For the basic prefetching functionality, the messaging overlay, the content prediction, and
the cache management have to be operable. Furthermore, for the mobile offloading
features of RB-HORST, a large number of participants with realistic social connections is
required. The performance of prefetching has to consider home routers distributed over
multiple ASes and a social network between the users. The impact of the content request
rate, the content request strategy, and the home router locations will be investigated.
The results will be analyzed in terms of prefetching efficiency, bandwidth utilization (saved
inter-domain traffic), energy consumption, and subjective QoE. The usage patterns
required by the mobile offloading features, including the interaction of trusted and
untrusted users, are investigated by project members and associated university students.
Deployment infrastructure
Access points are used as home routers running RB-HORST. Each of them includes WiFi
as well as a wired uplink port. The uplink port is used by the participants to connect the
devices to their home Internet access. Furthermore, the participants use their smartphones
as mobile devices.
Equipment set:
Parameters end-device:
Home router device status (CPU usage, network interface counters, available
space on local storage) [periodically 1/s]
Video Request Events (time, video ID, title, length, size, source,
download time) [Event]
Video Prefetching Events (time, video ID, title, length, size, source,
download time) [Event]
Video Serving Events (time, video ID, title, length, size, source, download
time) [Event]
Overlay neighbors (IP address, user) in fixed time intervals e.g. every
hour [Event]
WiFi on own home router [on user request, Event: connection change from
Cellular]
CPU utilization
Screen status
Metrics:
The following metrics are calculated based on the data collected during the experiments.
The calculations are conducted after the experiment on the data collection server.
Test procedures
The test procedures are conducted by the participants under supervision. Each home
router device is handed out to a group of at least two participants.
The home router is configured by one of the participants at home and connected to their
home network. This network connection is used for Internet connectivity. This participant
configures the home router to represent her/his Facebook identity in the RB-HORST
system. All participants install the RB-HORST application on their Android smartphone and
set it up with their credentials.
Relying on the existing Facebook relations of the participants, they are encouraged to
interact, use the RB-HORST system by visiting each other's locations, and post RB-HORST
compatible content on their Facebook walls. The participants trigger measurements on the
home router and on their smartphones. Finally, each participant fills out a survey on their
experience and provides information on their home network.
3.2.3 Evaluation of data offloading functionality in RB-HORST
The goal of this experiment is to compare the potential bandwidth and energy savings of
WiFi offloading, under realistic bandwidth conditions as experienced by the end user at
home or while moving, with a transmission via 3G/4G. For that purpose, the probability
that users offload at uNaDas of social contacts is measured, as well as the achievable
savings in overall bandwidth and energy.
The energy measurements rely on a power model. This model is generated by
simultaneously measuring the power consumption and monitoring the system utilization of
the underlying hardware. Using regression approaches, the power model is calculated.
The power model derived from the regression analysis is then used to convert system
utilization samples on each uNaDa to power estimates, which are sent to the energy
analyzer for aggregation. These models result in a low error (<5%) when applied to the
same device type.
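The regression step can be illustrated with a one-dimensional least-squares fit of power against a single utilization metric. This is only a sketch under simplifying assumptions: a real power model would regress over several predictors (per-interface network counters, CPU utilization), and the function names are illustrative.

```python
def fit_power_model(utilization, power_watts):
    # Ordinary least-squares fit of: power = idle + slope * utilization.
    n = len(utilization)
    mean_u = sum(utilization) / n
    mean_p = sum(power_watts) / n
    cov = sum((u - mean_u) * (p - mean_p) for u, p in zip(utilization, power_watts))
    var = sum((u - mean_u) ** 2 for u in utilization)
    slope = cov / var
    return mean_p - slope * mean_u, slope  # (idle power, watts per utilization unit)

def estimate_power(model, utilization_sample):
    # Convert a utilization sample into a power estimate for the energy analyzer.
    idle, slope = model
    return idle + slope * utilization_sample
```

Calibration runs on one device of a type yield a model that can then be reused on all devices of that type, as described above.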
This approach is also feasible for smartphones. Here, some additional effects have to be
considered, as the user interacts with the device and interfaces are only active for some
time. Still, on some devices (e.g. Nexus 5), it is possible to directly read the power
consumption using low level system calls or the power API.
Using the power estimates from the participating device, collected on the energy analyzer,
consequently allows deriving the cost of a data transmission between any two participating
devices. To analyze the energy cost of the RB-HORST mechanism, the power
consumption as recorded by the energy analyzer is correlated with the traffic caused by
RB-HORST to determine the overall cost of the mechanism. Further, it is also possible to
derive the cost of arbitrary data transfers and scheduling decisions of the mechanism.
For a derivation of the overall social mobile offloading potential, interactive experiments
with users in the context of the large-scale user study as described in Section 3.2.2 are
foreseen. Before the experiments, a survey is handed out to users to fill in basic data that
cannot be logged easily automatically, i.e., the name of the person deploying the uNaDa,
the location of the deployment (the address) and the nominal bandwidth of the DSL
connection as well as the ISP. Moreover, a number of technical parameters will be logged
by the RB-HORST uNaDa (social log) as well as the Mobile App (offloading log).
Parameters, Measurements and Metrics
To assess the energy cost of caching a video, the system utilization (i.e. network traffic on
each interface in each direction, CPU utilization) on the uNaDas participating in the
content distribution needs to be monitored. Based on the power model of the uNaDa, the
power consumption for receiving this video is derived.
The cost of transferring a cached video to the mobile device consists of the energy
consumption of the mobile device and the uNaDa, derived using the respective power models.
This cost can then be compared with the cost of streaming a video directly from the server.
For this, an energy model of the server must be assumed, while the power draw of the
uNaDa and the smartphone can be calculated using the calibrated power models.
The parameters of the experiment are:
uNaDa
o Power
o Or: Ethernet traffic in, WiFi traffic in, CPU utilization
Smartphone
o Power
o Or: CPU utilization, WiFi traffic in, Cellular traffic in, Display brightness, Other components
To calculate the full cost of the mechanism, the derived measurements are then to be
combined with the efficiency of the prefetching algorithm, including the cost of needlessly
fetched videos. This is done in a post-processing step on the evaluation server.
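Such a post-processing step could, for instance, split the prefetching energy into useful and wasted shares. The per-megabyte energy figure and the function name are illustrative assumptions, not measured values:

```python
def prefetch_energy_cost(prefetched, served_ids, joules_per_mb):
    # prefetched: list of (video_id, size_mb) fetched by the prefetching algorithm.
    # served_ids: set of video IDs that were actually requested later.
    # Returns (useful_joules, wasted_joules); the wasted share is the cost of
    # needlessly fetched videos.
    useful = wasted = 0.0
    for video_id, size_mb in prefetched:
        cost = size_mb * joules_per_mb
        if video_id in served_ids:
            useful += cost
        else:
            wasted += cost
    return useful, wasted
```

The wasted share directly penalizes the prefetching efficiency when the overall cost of the mechanism is reported.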
For the derivation of the overall potential of social mobile offloading, two logs are
necessary: The social log (Table 13) is logged regularly once per day by RB-HORST on
the uNaDa. The purpose of this log is to dump the social connections of the owner of the
access point.
Table 13 The social log structure of RB-HORST
1. line: MD5(facebook_name_of_unada_owner), SSID used for social offloading
2. line: MD5(unada_owners_friend_one)
3. line: MD5(unada_owners_friend_two)
...
n+1. line: MD5(unada_owners_friend_n)
Second, the offloading log (Table 14) contains all offloading events and is logged by the
App. The log is updated with a frequent sampling interval.
Table 14 The offloading log structure of RB-HORST
1. line: MD5(facebook_name_of_Smartphone_owner), SSID at which offloading took place, timestamp, offloaded volume
...
n+1. line: MD5(facebook_name_of_Smartphone_owner), SSID at which offloading took place, timestamp, offloaded volume
Both logs are pushed to a central measurement server regularly (once per hour). The MD5
hashes are necessary to maintain the privacy of users. The measured data is sufficient to
reconstruct the social graph and all offloading events.
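A sketch of how the social logs might be parsed server-side to rebuild the hashed social graph; the comma separator and the field layout are assumptions based on the log structure in Table 13:

```python
import hashlib

def md5_id(name):
    # Same hashing as in the logs, so user names stay private.
    return hashlib.md5(name.encode("utf-8")).hexdigest()

def parse_social_log(lines):
    # Line 1: owner hash and social-offloading SSID; lines 2..n+1: friend hashes.
    owner_hash, ssid = [field.strip() for field in lines[0].split(",")]
    friends = [line.strip() for line in lines[1:] if line.strip()]
    return owner_hash, ssid, friends

def build_social_graph(social_logs):
    # One edge per (uNaDa owner, friend) pair found in the daily social logs.
    graph = {}
    for lines in social_logs:
        owner, _ssid, friends = parse_social_log(lines)
        graph.setdefault(owner, set()).update(friends)
    return graph
```

Offloading events from Table 14 can then be joined against this graph via the smartphone owner's hash.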
Test procedures
Non-Interactive experiment for energy consumption:
1. Connect mobile phone to cellular network
2. Stream video via cell interfaces and measure
3. Connect to the RB-HORST AP
4. Stream video via the RB-HORST AP and measure
5. Stream/load cached video via the RB-HORST AP and measure
6. Compare consumed energy for each option (2, 4, 5)
Interactive experiment for social mobile offloading:
1. Access point users are encouraged to use the RB-HORST system actively
2. The offloading events and social relations are logged to the social log and the
offloading log, respectively
3. The logs are pushed to the logging server in regular intervals
4. The results of this experiment are obtained by post processing the acquired logs
Other relevant Information
The smartphone model used for the energy related experiments should be the Nexus 5, as
it simplifies the measurement procedure. It allows measuring the battery voltage and
current draw, and hence the power consumption of the device is derived by multiplying
both. The highest accuracy of the measurements can be achieved using the Raspberry Pi
as uNaDa, as calibrated power models are available.
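On such devices, the power draw can be derived from the battery voltage and current draw exposed by the kernel, commonly reported in microvolts and microamps under /sys/class/power_supply/. The sysfs path and the unit convention are assumptions and vary per device:

```python
def battery_power_watts(voltage_uv, current_ua):
    # Power in watts from battery voltage (microvolts) and current draw (microamps);
    # current sign conventions differ per kernel, hence the abs().
    return (voltage_uv * 1e-6) * (abs(current_ua) * 1e-6)

def read_battery_power(base="/sys/class/power_supply/battery"):
    # Hypothetical on-device reader; the sysfs path is an assumption.
    with open(base + "/voltage_now") as v, open(base + "/current_now") as c:
        return battery_power_watts(int(v.read()), int(c.read()))
```

For example, 4 V at 500 mA corresponds to a power draw of about 2 W.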
4 Showcases
During the Year 2 Technical Review with the EC the project team demonstrated a running
pilot implementation and major functionalities. Showcases can be considered as
preliminary experiments aimed at showing the basic behaviour of SmartenIT mechanisms
in a test-bed environment.
The three following showcases have been designed to validate selected aspects of the
SmartenIT implementation and to present the benefits of particular mechanisms:
Scenario: 4 ISPs. Required test-bed extension as defined in D4.1: 5.1.1 Traffic Generator.
Figure: DTM evaluation test-bed topology. Four ASes (AS1, AS2, AS3, AS4) are
interconnected; traffic generators (senders) and traffic receivers are deployed in the
ASes, DC traffic generators reside in Cloud A (DC-A) and Cloud B (DC-B), and an SDN
controller and S-Boxes are deployed in the test-bed. Legend: BGP router, intra-domain
router, inter-domain link, intra-domain link.
Table 16 Detailed IP Address table for the production network for the DTM evaluation.
IP Address or Address Range    Usage
10.0.0.0/8                     Interconnection
10.0.1.0/24                    Interconnection ISP1-ISP2
10.0.2.0/24                    Interconnection ISP1-ISP4
10.1.6.0/24                    Interconnection ISP2-ISP4
10.1.1.0/30                    Interconnection ISP2-ISP3
10.10.1.0/24                   ISP1
10.10.2.0/24                   ISP1
10.10.3.0/24                   ISP1
10.1.5.0/24                    ISP2
10.1.3.0/24                    ISP3
10.1.4.0/24                    ISP3
10.1.2.0/24                    ISP4
Figure: Mapping of the DTM evaluation topology to the physical test-bed. Three PCs host
KVM virtual machines: Vyatta BGP routers (dtm-isp1-rtr-bg1, dtm-isp1-rtr-bg2,
dtm-isp1-rtr-da, dtm-isp2-rtr-bg1, dtm-isp3-rtr-bg1, dtm-isp4-rtr-bg2), traffic generators
and receivers (dtm-isp1-vmtr1, dtm-isp1-vmtr2, dtm-isp2-vmtg1, dtm-isp4-vmtg2), data
centers (dtm-isp1-vmdc, dtm-isp3-vmdc), and S-Boxes (dtm-isp1-vmsbox,
dtm-isp3-vmsbox-sdn), interconnected via Open vSwitch instances (dtm-isp*-ofs*) and
management bridges, using the addressing of Table 16.
Host                  Services              Comment
dtm-isp1-rtr-bg1
dtm-isp1-rtr-bg2
dtm-isp1-rtr-da
dtm-isp1-vmdc
dtm-isp1-vmsbox       S-Box
dtm-isp1-vmtr1
dtm-isp1-vmtr2
dtm-isp2-rtr-bg1
dtm-isp2-vmtg1
dtm-isp3-rtr-bg
dtm-isp3-ofda
dtm-isp3-vmdc
dtm-isp3-vmsbox-sdn
dtm-isp4-rtr-bg2
dtm-isp4-vmtg2
Figure 17 Graph from the visualization application presenting real-time cost estimation.
During the showcase presentation, a burst of background traffic on link 2 was generated
(highlighted in Figure 18 with a red circle). As can be observed, such a sudden disruption
in the traffic distribution caused any new flow from the DC (manageable traffic) to be
redirected to the tunnel passing link 1 (visible as the high difference between total and
background traffic on link 1 in the top graph of Figure 18). In the bottom graph of
Figure 18, a deviation from the reference vector can easily be observed; however, the
execution of the DTM compensation procedure ensured that the reference vector was
shortly met again.
Figure 18 Selected graphs from the visualization application with highlight on traffic burst
compensation.
The DTM showcase clearly presents the benefits of implementing DTM inside the
network in terms of inter-domain traffic cost reduction.
of the SmartenIT test-bed; Table 11 and Table 12 present used IP address range and
services deployed on a particular VM.
Table 18 Overview on the SmartenIT test-bed extensions used with RB-HORST.
Scenario    Required test-bed extensions as defined in D4.1
3 ISPs      5.3.1 Raspberry Pi
George and Andri own a Google Nexus 5 smartphone, with at least Android OS
v4.4.4, and the HORST, Facebook and Firefox applications installed.
Andri's and George's uNaDas are Raspberry Pis, running the RB-HORST software
and offering access-point capabilities, while Sergios' uNaDa is a virtual machine,
only hosting the RB-HORST software.
HORST* SSIDs are open, with no Internet access, and are used for the
communication of the HORST Android application with the uNaDa.
Each user is the owner of his uNaDa, meaning that he has logged in to the
RB-HORST service with his Facebook credentials.
Andri and George are Facebook friends and watch similar videos. This means that
their uNaDa caches share some common videos and are considered as overlay
neighbors. Thus, in the next iteration of overlay prediction, their newly-cached
contents are likely to be prefetched to each other's uNaDa.
Sergios is not a Facebook friend of the other two users, but belongs to the same
domain as George. He has watched some content in the past, which is cached
locally, and his uNaDa participates in the uNaDa overlay network.
Finally, George receives the private SSID (RBH_secured) credentials and connects to it.
In the meantime, Andri's uNaDa has verified that George is one of Andri's friends and can
be considered a trusted user allowed to connect to the private SSID and browse the Internet.
The outcomes of the showcase are presented in:
The HORST Android application, which shows the evolution of the authentication
process and finally, the connection to the private SSID, and,
The uNaDa web interface, which presents the authentication attempt by a user
other than Andri, and finally, the successful outcome.
Overlay prediction
The overlay prediction showcase assumes that the reference scenario has been
completed, and a newly cached content item (Italy - A 1 Minute Journey) has been stored
in Andri's uNaDa cache. Now George returns home and connects to his home gateway,
and all the following prediction showcases occur at his uNaDa.
As stated in the showcase assumptions, Andri's and George's uNaDas are overlay
neighbors, which means that they already share some common videos, and their overlay
prediction algorithms will predict that newly-watched/cached videos should be prefetched.
Hence, the overlay prediction in George's uNaDa predicts that the Vimeo video Italy - A 1
Minute Journey is likely to be watched again. Although the video exists in Andri's uNaDa,
it is preferred to fetch it from the Vimeo CDN servers, because they are closer than
Andri's uNaDa.
When George watches the video again, his video request is proxied by the uNaDa, and
the video is served from the uNaDa server, resulting in rapid video buffering and higher
QoE than in the reference scenario.
George's Firefox web browser, in which the video buffers instantly during playback,
and,
George's uNaDa, which shows that the content is prefetched from the Vimeo servers
instead of Andri's uNaDa, and George's request is intercepted and served by the
local uNaDa content server.
Social Prediction
Andri watches another video, Brussels in 1 minute, from his home and posts it to his
Facebook wall.
Social prediction in George's uNaDa predicts that this video is likely to be watched by
George and should be prefetched. This video has also been watched by Sergios and is
cached on Sergios' uNaDa, which belongs to the same domain as George. Hence the
video is fetched from Sergios' uNaDa, instead of the Vimeo servers, resulting in
inter-domain traffic savings.
When George checks his Facebook News Feed, Andri's Facebook post appears and
George tries to watch the posted Vimeo video. His uNaDa proxies his request and serves
the video from the local content server, resulting in better QoE than in the reference
scenario.
George's uNaDa web interface, which shows the outcome of the social prediction
and the content source selection (Sergios' uNaDa instead of the Vimeo servers) and
the content delivery by the local uNaDa content server.
efficiency for end users by distributing and moving content and services in an energy
aware way, while taking provider's interests into account. Finding the right content
placement is an optimization problem to be solved in an energy efficient manner. In order
to optimize the placement with respect to energy efficiency, information on energy
consumption is needed. This data can be derived from energy models providing an energy
estimate of the placement. Energy models which estimate energy consumption from
network measurements exist in the literature. Moreover, modelling the energy efficiency
of placements also allows for the prediction of future energy consumption. The
Operator Focused Scenario (OFS) also has the goal of achieving the highest operating
efficiency in terms of low energy consumption, besides other optimization criteria. E.g.,
Cloud Federation enables collaboration so as to achieve both individual and overall
improvements in cost and energy consumption. Moreover, data migration/re-location may
often be imposed by the need to reduce overall energy consumption within the federation
by consolidating processes and jobs to a few DCs only. Therefore, the OFS scenario
defines a series of interesting problems to be addressed by SmartenIT, specifically energy
efficiency for DCOs, either individually or overall for all members of the federation. To this
end, the EEF demo showcases the SmartenIT energy framework. It consists of the Energy
Analyzer, which provides energy consumption estimates, thereby enabling an
energy-efficient network management approach.
A precondition for optimizing energy consumption is the measurement of energy
consumption. Thus, a measurement platform for energy consumption based on validated
energy models is integrated into the RB-HORST showcase. The measurement of energy
consumption without the need of additional measurement hardware is demonstrated and
the measurements are visualized.
4.3.1 Scenario topology
The network configuration for the EEF demo is based on the multi-ISP scenario. Three
ISPs are used with a total of three uNaDas: ISP2 hosts one uNaDa, and ISP3 hosts two
uNaDas, one of which runs on a Raspberry Pi and one in a VM.
The network topology and test-bed mapping is identical to the one used for the RB-HORST
showcase as described in Section 3.2.1.
4.3.2 Scenario assumptions
The experiment setup and scenario assumptions are depicted in Figure 24. In particular,
the following assumptions are made:
Each smartphone is equipped with a tunnel using the 3G interface to the test-bed
Note: RB-HORST is considered to be one possible application under test for this demo. It
is not argued that RB-HORST interfaces with EEF. However, both RB-HORST and DTM
can use the EEF framework to optimize for energy efficiency.
The energy monitor constantly measures and delivers system parameters to the Energy
Analyzer, which converts the measured data into energy estimates using validated models
measured by TUD for the SmartenIT project. The energy estimates are visualized together
with the topology shown above.
4.3.3 Reference scenario
The Mobile Internet Access Offloading in EEF/RB-HORST is compared to the case of
regular downloads using a 3G or 4G cellular connection. The metrics to be compared are
the download speed and the power consumption of the mobile device, combined with an
estimate of the backend power consumption.
The reference scenario is established as follows: The user downloads a video using the
3G connection. The video is delivered via 3G. At the same time, the energy consumption
of the smartphone is monitored and visualized. This is represented as the red connection
marked in Figure 25, not making use of the uNaDa.
4.3.4 Showcase scenario
The showcase scenario is depicted in Figure 25.
Step 1 (Using WiFi offloading instead of 3G):
The user downloads the same video using the RB-HORST WiFi. While viewing the
video, it is cached in the local RB-HORST cache. At the same time, the energy
consumption of the smartphone and the uNaDa is monitored and visualized
side-by-side. The smartphone shows better energy efficiency than in the reference
case. At the same time, the uNaDa shows higher energy consumption, as traffic on
the WiFi interface is generated.
5 Summary
The core of this document is a set of experiment definitions which will be used to drive the
prototype validation activities. As the project covers two complementary domains, one
where the end-user has a key role and a second one dominated by an ISP and a cloud
service provider or a data center, the experiments reflect the respective characteristics.
The OFS (Operator Focused Scenario) experiments have been designed to validate the
prototype implementation of DTM. Two main experiment scenarios have been proposed to
prove the optimization benefit of the DTM mechanism:
evaluation of multi-domain traffic cost reduction in DTM with data transfer between
two locations (distant Data Center/cloud resources),
evaluation of multi-domain traffic cost reduction in DTM with data transfer between
multiple locations (DCs/clouds serving as traffic sources and receivers).
The EFS (End-user Focused Scenario) experiments focus on the implementation of the
RB-HORST mechanism. They mainly validate optimization techniques for caching and
WiFi offloading with the use of social network information and energy efficiency
measurement. The following three EFS experiments have been designed:
The second EFS experiment is especially interesting and promising because it will be
executed in cooperation with students who will be using the RB-HORST implementation.
Moreover, the deliverable D4.2 includes the descriptions of showcases organised during
the second year technical review with the EC. The SmartenIT project presented functions
of the pilot implementation in a test-bed prepared for the event. The work on showcases
was a direct input to the further efforts on experiment definitions.
Each experiment definition and showcase contains a sufficient set of details for the project
team responsible for validation to set up and execute the software. In the next stage, the
results of the experiments will be analysed to assess the SmartenIT solutions.
6 SMART Objectives
Through this document, four SmartenIT SMART objectives defined in Section B1.1.2.4 of
the SmartenIT Description of Work (DoW) have been partially addressed. Namely, one
overall (O4, see Table 19) and three practical (O2.2, O3.1 and O3.4; see Table 20)
SMART objectives were addressed.
The overall Objective 4 is defined in the DoW as follows:
Objective 4 SmartenIT will evaluate use cases selected out of three real-world scenarios
(1) inter-cloud communication, (2) global service mobility, and (3) exploiting
social networks information (QoE and social awareness) by theoretical
simulations and on the basis of the prototype engineered.
This deliverable provides the definitions of experiments for OFS and EFS scenarios
(Section 3). Both scenarios were proposed as an outcome of the analysis and integration
process conducted in WP2. The OFS and EFS experiments include the characteristics of
all three scenarios listed in Objective 4. The initial evaluations of use cases with the
selected network traffic management solutions have already been performed during the
Year 2 Review meeting as the showcases (Section 4). Mainly, they showed the available
functionalities. The advanced experiment definitions described in D4.2 provide required
details to run advanced functional and performance tests of the implemented SmartenIT
solutions.
Table 19: Overall SmartenIT SMART objective addressed.
Objective No.: O4
Specific: Evaluation of use cases
Measurable: Deliverable Number, Mile Stone Number
Achievable: Implementation, evaluation
Relevant: Complex
Timely: MS4.3
Table 20: Practical SmartenIT SMART objectives addressed.

Objective ID: O2.2
Specific: Which parameter settings are reasonable in a given scenario/application for the designed mechanisms to work effectively?
Metric: Number of parameters identified, where a reasonable value range is specified
Achievable: Design, simulation, prototyping (T2.2, T3.4, T4.2)
Relevant: Highly relevant output of relevance for providers and users
Timely: Project Month M24

Objective ID: O3.1
Specific: Which techniques are to be used to retrieve management information from cloud platforms and OSNs?
Metric: Number of studied cloud providers, number of identified types of management information, number of compared retrieval techniques, number of studied OSNs, number of identified types of social information or meta-information related to users' social behaviour
Achievable: Design (T1.1, T4.1, T4.2)
Relevant: Highly relevant output of relevance for providers
Timely: Project Month M24

Objective ID: O3.4
Specific: How to monitor energy efficiency and take appropriate coordinated actions?
Metric: Number of options identified to monitor energy consumption on networking elements and end users' mobile devices, investigation on which options perform best (yes/no)
Achievable: Design, simulation, prototyping (T1.3, T2.3, T4.1, T4.2, T4.4)
Relevant: Highly relevant output of relevance for users
Timely: M3.6
Objective 3.4: How to monitor energy efficiency and take appropriate coordinated
actions?
The answer to the question about energy efficiency monitoring is provided by the
experiment which utilises the Energy Efficiency Measurement Framework (EEF). It
compares the energy consumption of WiFi and cellular data transmissions under
realistic bandwidth conditions with the use of RB-HORST.
7 References
[1]
[2]
[3]
[4]
[5]
8 Abbreviations
3G        Third Generation
AGH       AGH University of Science and Technology
AS        Autonomous System
BGP       Border Gateway Protocol
CDN       Content Delivery Network
CPU       Central Processing Unit
DA
DC        Data Center
DCO       Data Center Operator
DoW       Description of Work
DTM       Dynamic Traffic Management
EEF
EFS       End-user-Focused Scenario
GRE       Generic Routing Encapsulation
HTTP      Hypertext Transfer Protocol
ICOM
IP        Internet Protocol
IRT       Interoute S.p.A.
ISP       Internet Service Provider
KPI       Key Performance Indicator
MONA
M-to-M    Multiple to Multiple
NSP       Network Service Provider
OFS       Operator-Focused Scenario
OSN       Online Social Network
OVS       Open vSwitch
PC        Personal Computer
PSNC      Poznan Supercomputing and Networking Center
QoE       Quality of Experience
QoS       Quality of Service
RAM       Random-Access Memory
REST      Representational State Transfer
RB-HORST
RTT       Round-Trip Time
S-to-S    Single to Single
SDN       Software-Defined Networking
SEConD
SMART     Specific, Measurable, Achievable, Relevant, Timely
SNMP      Simple Network Management Protocol
SSID      Service Set Identifier
TUD       Technische Universität Darmstadt
UDP       User Datagram Protocol
uNaDa     User-owned Nano Data center
UniWue    University of Würzburg
UZH       University of Zürich
WiFi      Wireless Fidelity
vINCENT   Virtual Incentives
VM        Virtual Machine
9 Acknowledgements
This deliverable was made possible due to the large and open help of the WP4 team of the
SmartenIT consortium within this STREP, which includes, besides the deliverable authors
indicated in the document control, Krzysztof Wajda (AGH), Gino Carrozzo (IRT), and
Burkhard Stiller (UZH), who provided valuable feedback and input.