Computer Networks
journal homepage: www.elsevier.com/locate/comnet
Article info
Article history:
Received 5 November 2014
Revised 8 June 2015
Accepted 27 August 2015
Available online 8 September 2015
Keywords:
Cloud computing
Cloud federation
Energy efficiency
Energy cost saving
Energy sustainability
Abstract
Nowadays, the increasing interest in Cloud computing is motivated by the possibility of promoting a new economy of scale in different contexts. In addition, the emerging concept of Cloud federation allows providers to optimize the utilization of their resources by establishing business partnerships. In this scenario, the massive exploitation of ICT solutions is increasing the energy consumption of providers, and many researchers are therefore investigating new energy management strategies. Nevertheless, balancing Quality of Service (QoS) with both energy sustainability and cost saving is far from trivial. The growing interest in this area is highlighted by the increasing number of contributions appearing in the literature. Currently, most energy management strategies focus specifically on independent Cloud providers, while others are beginning to look at Cloud federation. In this paper, we present a survey that helps researchers identify future trends of energy management in Cloud federation. In particular, we select the major contributions dealing with energy sustainability and cost-saving strategies for Cloud computing and federation, and we present a taxonomy useful for analyzing the current state of the art. Finally, we highlight possible directions for future research efforts.
© 2015 Elsevier B.V. All rights reserved.
1. Introduction
Nowadays, there is increasing interest in energy management for Cloud computing. The growing success of Cloud computing is mainly due to its greater flexibility, reliability, and scalability compared with traditional information and communication technology (ICT) systems, and to its capability to satisfy the worldwide demand for highly specialized and customized services. In the context of ICT processes, the cost of energy is one of the major factors determining the cost of services provided to users. Typically, such a cost is due to the management of the infrastructure and human resources.
generally uncorrelated: energy cost-saving and energy sustainability. Another interesting aspect is that the energy market (and the electrical energy market in particular) is becoming highly dynamic, both in terms of cost and quality of provisioning. The advent of the free market and the increasingly widespread utilization of primary renewable energy sources, along with the continuous need to balance distribution in power grids, is changing the way energy is provided. In the last decade, new solutions for energy production have modified the dynamics of the market, leading to new forms of agreement between providers and consumers that differ markedly from the past. By now, the management policies of the electricity market have been developed according to the automated demand response (ADR) paradigm [3,4], where energy providers and consumers dynamically fix their production/management policies according to current market conditions. As a consequence, it is possible to carry out dynamic management of ICT services following ADR principles. In this panorama, considering ICT processes, the cost of energy is one of the major factors to consider when pricing services offered to end-users. Data processing, storage, and transport imply the usage of a set of devices that consume electricity for different purposes. This energy consumption is typically due to the power supply of ICT, electrical, and cooling equipment.
Currently, a wide variety of contributions are available in the literature focusing on energy efficiency in Cloud computing. However, many existing surveys have tried to assess energy management issues by focusing only on data center (DC) needs; see [5–8]. Indeed, most of the analyzed scientific contributions focus on how to reduce the waste of energy in DCs and are not specific to the Cloud. Considering Cloud computing, scientific contributions have mainly regarded solutions confined to the DC of an independent Cloud provider managed by a specific administrative domain of an
computing and federation, both in terms of energy sustainability and energy cost-saving, we present a taxonomy that allows researchers to better understand these two main aspects. Currently, there are many ways to structure a taxonomy. In our opinion, many existing surveys use an approach that is too dispersed: many authors introduce numerous labels and build several taxonomic trees to analyze the state of the art of a particular research area. As a consequence, the reader has to spend considerable time to understand the taxonomy. In our paper, we present a compact taxonomic tree that allows the reader to quickly analyze the state of the art and classify existing scientific contributions. In particular, our taxonomic tree represents a compact key of interpretation for quickly understanding the current state of the art regarding energy management in Cloud computing. In this manner, we hope our survey can be useful to researchers interested in designing new energy management solutions for federated Cloud ecosystems.
The rest of the paper is organized as follows. Section 2 describes our taxonomy. Sections 3 and 4 respectively analyze scientific contributions focusing on energy sustainability and energy cost-saving. A critical discussion of the current state of the art and future research trends is presented in Section 5. Section 6 concludes the paper.
2. Taxonomic analysis
In this section, first of all, we provide a few definitions regarding the key concepts treated in this paper, and then we describe the taxonomic tree used to analyze the current state of the art.
2.1. Cloud-related terminology
In the following, we analyze the concepts of Independent
Cloud and Cloud federation.
scientific approach to better analyze the current state of the art. In order to achieve this goal, we developed a taxonomic tree highlighting the main aspects involved. The taxonomic tree is depicted in Fig. 2. During the development of our taxonomic tree, we identified several sub-trees, under different parent nodes, that had the same structure. However, we realized that the graphical representation of all these sub-trees was not possible owing to space constraints, because the taxonomic tree became too large. Therefore, in order to build a compact taxonomic tree, we adopted the notion of a block to
Regarding Processing-Fixed Cloud systems, for physical machines and clusters, the taxonomic tree highlights the typical elements consuming energy, namely Information Technology (IT), electrical (ELE), and cooling (COOL) equipment. Regarding IT equipment, energy efficiency can be improved by implementing specific strategies for task scheduling (i.e., reducing the active period of tasks), load balancing (i.e., in order to reduce the total amount of wasted energy), peak sharing (i.e., minimizing the peaks of workloads that occur in cycling loads), ON/OFF triggers (i.e., turning a device ON/OFF to reduce the total amount of energy consumption), no-load power (i.e., minimizing the load power during idle cycles), split-plane power policies (i.e., delivering separate power supplies to the processor and north-bridge), clock gating (i.e., activating and deactivating the clock to reduce dynamic power dissipation), and by using energy-efficient components (i.e., CPUs, memories, and storage with a highly energy-efficient life cycle).
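Several of the strategies above (e.g., load balancing combined with ON/OFF triggers) amount, in essence, to consolidating work onto as few machines as possible and powering down the rest. The following is a minimal illustrative sketch of such a consolidation step (function names, the first-fit-decreasing heuristic, and the workload figures are our own, not taken from any surveyed work):

```python
def consolidate(task_loads, machine_capacity):
    """Pack task loads (first-fit decreasing) onto the fewest machines of a
    given capacity; machines left unused can then be switched OFF by an
    ON/OFF trigger policy to avoid idle power draw."""
    bins = []  # residual capacity of each machine that stays ON
    for load in sorted(task_loads, reverse=True):
        for i, free in enumerate(bins):
            if load <= free:
                bins[i] -= load  # fits on an already-active machine
                break
        else:
            bins.append(machine_capacity - load)  # power ON one more machine
    return len(bins)  # number of machines that must stay ON

# 10 machines available, but this workload fits on 3: 7 can be powered OFF.
total_machines = 10
active = consolidate([0.6, 0.5, 0.4, 0.4, 0.3, 0.2], machine_capacity=1.0)
print(active, "ON /", total_machines - active, "OFF")  # 3 ON / 7 OFF
```

Real schedulers must also weigh migration costs and QoS constraints, which this sketch deliberately ignores.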
Regarding ELE equipment, energy management strategies include UPS ON/OFF and ECO-lighting policies. ECO-lighting aims to reduce energy waste through specific policies in illumination management, such as timed or automated switching of lamps. Uninterruptible power supplies (UPSs) and battery/flywheel backups are electrical equipment that provide emergency power to a load when the input power source fails.
Regarding COOL equipment, energy management policies mainly include free cooling (i.e., a technique to extract heat from DCs considering the Temperature (T) and Relative Humidity (RH) of equipment) and Heating, Ventilating, Air Conditioning, and Refrigerating (HVAC(R)) activities [14].
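In essence, a free-cooling decision reduces to checking whether the ambient air's temperature and relative humidity fall within an allowed operating envelope before bypassing the chillers. A toy eligibility check follows; the envelope bounds and function name are illustrative placeholders, not values from any standard or from the surveyed papers:

```python
def free_cooling_allowed(outside_temp_c, outside_rh_pct,
                         t_range=(5.0, 22.0), rh_range=(20.0, 80.0)):
    """Return True when outside air (temperature in Celsius, relative
    humidity in percent) lies within the configured envelope, so the DC
    can be cooled with outside air instead of running the chillers."""
    t_lo, t_hi = t_range
    rh_lo, rh_hi = rh_range
    return t_lo <= outside_temp_c <= t_hi and rh_lo <= outside_rh_pct <= rh_hi

print(free_cooling_allowed(15.0, 50.0))  # True: mild, dry-enough air
print(free_cooling_allowed(30.0, 50.0))  # False: too warm, chillers needed
```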
Virtualization techniques allow systems to abstract physical resources, making them more flexible in a virtual form. Generally, a virtual machine (VM) is a virtual environment that emulates a physical environment. Regarding this concept, our taxonomic tree distinguishes two blocks: E and F. The E block identifies the issues related to a specific VM, such as IT equipment policies, VM Allocation, and Power Metering strategies. The F block extends the E block to include the issues related to clusters of VMs, namely VM Migration and Redundant VM management.
With reference to Processing-Mobile Cloud systems, the taxonomic tree shows very different energy strategies for physical and virtual resources. Physical resources can be mobile devices or servers able to support mobility. The H block summarizes the energy efficiency strategies operating on IT, such as Resource management (i.e., optimizing interfaces, ports, and tx/rx physical components), Computation offloading (i.e., delegating the computation to an off-device Cloud), GPS querying reduction (i.e., minimizing the energy consumption due to the on-board GPS), and Task delegation (i.e., delegating tasks to an off-device Cloud infrastructure) [15]. Regarding ELE equipment for a physical device, power supply optimization mainly involves advances in battery technologies [16].
The Mobile physical server node in the tree identifies the policies concerning a Cloud infrastructure in a mobile scenario. Specifically, the J block includes the same elements as the D block presented above, plus new components such as Matching algorithms [17] (i.e., finding the best matching routes to minimize the use of GPS queries on the server side), and Crowd-sourced systems [18] (i.e., crowd-sourced
DCiE = 100 · (1 / PUE)    (3)
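Eq. (3) relates Data Center infrastructure Efficiency (DCiE) to Power Usage Effectiveness (PUE), the ratio of total facility power to IT equipment power. As a quick numeric illustration (the power readings below are hypothetical):

```python
def pue(total_facility_power_kw, it_equipment_power_kw):
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_power_kw / it_equipment_power_kw

def dcie(total_facility_power_kw, it_equipment_power_kw):
    """Data Center infrastructure Efficiency, per Eq. (3): 100 * (1 / PUE)."""
    return 100.0 / pue(total_facility_power_kw, it_equipment_power_kw)

# Hypothetical readings: 1800 kW drawn by the whole facility, 1200 kW by IT.
print(pue(1800, 1200))   # 1.5
print(dcie(1800, 1200))  # ≈ 66.7: percentage of facility power reaching IT
```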
vironments, measuring energy consumption in Cloud environments based on different runtime tasks. They plan to integrate their research results into Cloud systems to monitor energy consumption and support static or dynamic system-level optimization. In [6], the authors describe interesting metrics for making Cloud computing greener. Specifically, they discuss different power and energy models, identifying the major challenges in building a green Cloud. To evaluate the goodness of solutions and techniques for decreasing CO2 emissions (e.g., utilization of green energy sources, reduction of the number of physical and virtual machines, usage of greener machines), a set of metrics to measure the greenness of a Cloud infrastructure is presented in [38]. Yamagiwa et al. [39] propose a high-performance environment as a Green Cloud platform, using solar power and low-consumption computers to overcome the problem of continuous energy requirements (which is undesirable in environmental terms because of the increased CO2 emissions). Moreover, the authors consider the solar panels and sealed lead-acid batteries necessary to support mobile systems.
3.1.3. Virtual resource management in DCs
With specific regard to Virtualization Technology, many efforts try to increase the flexibility of different types of Virtual Machine Managers (VMMs). The software that controls the virtualization is usually called a hypervisor, and the software program that emulates a specific hardware system is commonly called a Virtual Machine (VM). The introduced software layer emulates the operating system and allocates the necessary hardware resources. In [40], the authors study the VM allocation problem, that is, how to allocate VMs in a DC in order to fulfill VM demands and reduce the total amount of energy consumption. Applying efficient VM placement algorithms allows Cloud providers to reduce their carbon footprint [41]. In [42], the authors consider the efficient use of virtualized resources by applications, and not only of the underlying infrastructure, for energy-efficient and CO2-aware Cloud computing. Cardosa et al. [43] propose a solution for efficient allocation of VMs in virtualized heterogeneous computing environments. The authors leverage the min, max, and shares parameters of the VMM, which represent the minimum, maximum, and proportion of the CPU allocated to VMs sharing the same resource. In [44], the authors discuss power provisioning and power tracking applications, presenting a solution, named Joulemeter, for VM power metering. Even if their study is limited to a single Cloud, they show many interesting aspects and discuss techniques useful for federated Clouds. The authors of [45] propose energy-efficient resource allocation policies and scheduling algorithms based on QoS expectations and the power usage features of the involved devices. Moreover, they propose an architectural framework and basic principles for energy-efficient management of Clouds. In particular, this framework manages VMs as modules that can be dynamically started and stopped on a single physical machine, according to the incoming requests. The proposed policies improve the flexibility of configuring resources and running applications on different operating systems on the same physical machine, satisfying the different requirements of service requests. Another important aspect of energy saving in terms of resource management is the dynamical
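The min/max/shares model mentioned for [43] can be read as a proportional-share split of a host CPU, clamped to each VM's minimum and maximum. The helper below is our own illustrative reading of that scheme (a single pass that does not redistribute the surplus freed by clamping), not the authors' code:

```python
def allocate_cpu(vms, host_cpu):
    """vms: list of dicts with 'min', 'max', and 'shares' keys.
    Split host_cpu proportionally to each VM's shares, then clamp each
    allocation to [min, max]. Surplus freed by clamping is not
    redistributed, for brevity."""
    total_shares = sum(v["shares"] for v in vms)
    return [min(v["max"], max(v["min"], host_cpu * v["shares"] / total_shares))
            for v in vms]

vms = [{"min": 0.5, "max": 2.0, "shares": 100},
       {"min": 0.5, "max": 1.0, "shares": 300}]
# 4 CPUs split 1:3 gives 1.0 and 3.0; the second VM is capped at its max.
print(allocate_cpu(vms, host_cpu=4.0))  # [1.0, 1.0]
```

A production scheduler would iterate, redistributing capacity left over after clamping until all constraints are satisfied.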
Table 1
Energy sustainability in independent Clouds.

Taxonomic path   Num   References
A-D-G-N            8
C-D-G-N            1
D-G-N              2
E-G-N              7
E-F-G-N            5
F-G-N              3
H-I-M-N            5
T                  4
Total             35
energy efficiency in the development and deployment of applications in federated environments. The Eco2Clouds Consortium includes several European partners, whose research groups have produced several significant technical and scientific contributions. A description of new metrics and a monitoring infrastructure based on the Zabbix [58] monitoring framework for eco-efficient Cloud federations is shown in [59]. In [60], the authors compare the EU CoolEmAll and Eco2Clouds projects. They describe the metrics used in these projects to assess the energy efficiency of DCs and Cloud resources, and the energy costs of application/workload execution, for various DC granularity levels and federation sites.

In [61], the authors present a multi-objective genetic algorithm, named MO-GA, to optimize energy consumption, reduce CO2 emissions, and generate profit in a geographically distributed Cloud computing infrastructure. They propose a Pareto resource allocation approach for Clouds focused on energy, greenhouse gas emissions, and profit, and use MO-GA to find the best scheduling according to the above-mentioned goals. Their work differs from other studies because it deals with both computing and energy consumption in the proposed energy model, and their approach exploits the geographical distribution of a Cloud federation.
To conclude this section, and looking at possible scenarios where dynamic resource management could allow a reduction of emissions and greater energy savings, we mention a model developed by Lawrence Berkeley National Lab and Northwestern University, named the CLEER (Cloud Energy and Emission Research) Model [62]. It aims to provide a comprehensive and user-friendly model able to assess the net energy and GreenHouse Gas (GHG) emissions of Cloud systems. GHG emissions derive both from the concrete use of electricity and from the manufacturing of the associated servers, compared with the existing physical and digital systems they can replace. Specifically, the CLEER Model taxonomy consists of sub-models of utilization (e.g., DC operational energy utilization and business-client IT device operational energy utilization). Each sub-model has further sub-models able to represent many different solutions involving infrastructures, devices, network segments, transportation, and applications (both business and residential). The CLEER Model calculates the cumulative network energy utilization by adding up the energy utilization contributions of each segment, through mathematical equations applied to network sub-models based on the definitions of Baliga et al. [63]. Baliga et al. estimate the per-bit energy consumption of transmission and switching to be around 2.7 J/b for a public Cloud and around 0.46 J/b for a private Cloud, claiming that transport represents a more significant energy cost in public Cloud services than in private Cloud services. At the same time, they claim that power consumption in transport represents a significant proportion of total power consumption for Cloud storage services at medium and high usage rates. The peculiarity we found interesting in the CLEER Model is mainly the ability to make choices in a user-friendly, custom scenario for the use of Cloud systems, for example by selecting the country where the Cloud is located according to one's own needs. In this way, one can obtain information on the carbon intensity of the electricity source (kgCO2e/kWh) and on Primary Energy (MJ/kWh) for different geographical areas, and thus for different parameters (e.g., average transmission energy (J/bit)). CLEER
Table 2
Energy sustainability in federated Clouds.

Taxonomic path   Num   References
A-D-G-N            1   [63]
F-G-N              4   [11], [59], [61], [62]
H-I-M-S            1   [64]
T                  1   [65]
Total              7
Table 3
Energy cost-saving in independent Clouds.

Taxonomic path   Num   References
A-D-G-N            1   [70]
C-D-G-N            1   [68]
D-G-N              1   [66]
F-G-N              1   [69]
T                  1   [67]
Total              5

Table 4
Energy cost-saving in federated Clouds.

Taxonomic path   Num   References
A-E-G-N            1   [72]
B-D-G-N            1   [74]
C-D-G-N            1   [73]
F-G-N              3   [7], [71], [75]
Total              6
A brief summary of the relationship between each contribution in the field of cost-saving in independent Clouds and the related block in our taxonomic tree is reported in Table 3.
4.2. Energy cost-saving in federated Clouds
The problem of energy management in a federated Cloud environment is addressed in [69]. The authors show a modelling approach based on stochastic reward nets (SRNs) to investigate the most convenient strategies for managing a Cloud federation. They consider a scenario in which N federated IaaS Clouds cooperate with each other in order to reduce energy costs and satisfy user requests. Specifically, the authors describe interesting indexes that, properly combined, allow providers to reduce energy costs. Hongxing Li et al. [70] introduce an efficient algorithm for the scheduling of resources in a federation of geo-distributed Clouds. In particular, the authors optimize the scheduling delay for each job in order to maximize the profit of each involved Cloud. However, they do not go deeper into how to minimize energy costs. In [71], the authors present very interesting policies to address energy cost-saving strategies in federated Clouds. In their policies they consider three cost factors: the energy price, the peak power charge, and the energy consumed by the cooling system. They consider that the electricity cost has two components: the cost of the energy consumed (energy price), and the cost of the peak power drawn at any particular time (peak power price). They assume these costs are expressed in dollars per kWh and that the peak power cost is defined on the maximum power drawn within some accounting period. Therefore, they assume there are two peak power charges, one for the on-peak hours and one for the off-peak hours. In addition, they assert that the DC would be charged for the 15 min with the highest average power drawn across all on-peak hours in the month, and for the 15 min with the highest average power drawn across all off-peak hours in a period of one month. In the end, they conclude that cost-aware load placement policies can significantly push down providers' operational costs. In this regard, the authors of [72] introduce an experimental platform and show a wide variety of experimental evaluations, assuming a fixed energy price and peak tariff. Buyya et al. [73] assert that next-generation Cloud service providers should be able to dynamically expand or resize their provisioning capability, but not only that. In particular, they highlight the factors that pose difficult problems in the effective provisioning and delivery of application services, and thus the criticality of finding efficient solutions to the following challenges in exploiting the potential of federated Cloud infrastructures. Therefore, they refer to Flexible Mapping of Ser-
Table 5
References for the covered taxonomic paths of the examined solutions.

                  Independent Clouds            Federated Clouds
Taxonomic path    Sustainability  Cost saving   Sustainability  Cost saving   TOT
A-D-G-N                 8              1               1              0         10
A-E-G-N                 0              0               0              1          1
B-D-G-N                 0              0               0              1          1
C-D-G-N                 1              1               0              1          3
D-G-N                   2              1               0              0          3
E-G-N                   7              0               0              0          7
E-F-G-N                 5              0               0              0          5
F-G-N                   3              1               4              3         11
H-I-M-N                 5              0               1              0          6
H-I-M-S                 0              0               0              0          0
T                       4              1               1              0          6
Total                  35              5               7              6         53
be further explored to improve energy efficiency in IT equipment for virtual machines (path A-E-G-N) and in electrical equipment for physical machines (path B-D-G-N). Important projects, such as Eco2Clouds in Europe, aim to research and propose solutions optimized for the reduction of CO2 emissions, but the influence of federation techniques and policies on the reduction of environmental impact still remains to be explored (path F-G-N). Like the energy consumption of a DC, the energy consumed for the transport of information across the network is important for energy saving and cannot be neglected in a complete investigation. Indeed, switches and networking equipment lead to an increase in energy consumption compared with the baseline consumption. The usage of Cloud-based services produces additional network energy consumption, because it increments the traffic on the Internet, and this is more significant in public Clouds than in private ones, as already discussed in this work. There are few contributions that focus on reducing the energy consumption due to the transport of information (path T). Moreover, they are often limited to network devices within the DC or to networking for non-federated Cloud environments.
Cost saving is totally unexplored in the management of virtual fixed processing resources (paths E-G-N and E-F-G-N), whereas many contributions in this field are provided for energy sustainability in independent Clouds. Cost saving is also unexplored for mobile physical servers (path H-I-M-N).

In the scenario of federated Clouds, there are no contributions in the literature (to the best of our knowledge) to improve energy sustainability or energy cost-saving when virtual fixed processing resources (paths D-G-N, E-G-N, and E-F-G-N) and mobile resources for storage (path H-I-M-S) are managed.
The proposed taxonomic tree is very useful for identifying unexplored strategies for energy saving in Clouds. Indeed, the taxonomic paths not present in Table 5 identify possible research activities that, to the best of our knowledge, have not yet been investigated. Moreover, our taxonomy gives us the opportunity to devise innovative strategies specifically designed for Clouds, especially in federated environments. For example, the site with the lowest energy costs is not always the most favorable for reducing the environmental footprint (e.g., CO2-equivalent emissions). In order to reduce pollution, more
Antonio Puliafito is a full professor of computer engineering at the University of Messina, Italy. His interests include parallel and distributed systems, networking, wireless, GRID and Cloud computing. During 1994–1995 he spent 12 months as a visiting professor at the Department of Electrical Engineering of Duke University, North Carolina, USA, where he was involved in research on advanced analytical modelling techniques. He is the coordinator of the Ph.D. course in Advanced Technologies for Information Engineering currently available at the University of Messina and the person responsible for the course of study in computer engineering. He was a referee for the European Community for the projects of the fourth, fifth and sixth Framework Programmes, and he is currently acting as a referee also in the seventh FP. Puliafito is co-author (with R. Sahner and Kishor S. Trivedi) of the text entitled Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package. He is currently the director of the RFIDLab, a joint research lab with Oracle and Intel on RFID and wireless. From 2006 to 2008 he acted as the technical director of Project 901, aimed at creating a wireless/wired communication infrastructure (winner of the CISCO innovation award). He is currently a member of the general assembly and of the technical committee of the FP7 Vision project.