
Real world issues in cloud computing

Varsha Prabhu, 13CO250

1. Introduction
Cloud computing model
In the cloud deployment model, networking, platform, storage, and software infrastructure
are provided as services that scale up or down depending on demand. Cloud computing has
three main deployment models:
1.1 PRIVATE CLOUD
Private cloud is a relatively new term that some vendors use to describe offerings that
emulate cloud computing on private networks. A private cloud is set up within an
organization's internal enterprise datacenter. As in the public cloud, scalable resources and
virtual applications provided by the cloud vendor are pooled together and made available for
cloud users to share and use; the difference is that all the cloud resources and applications
are managed by the organization itself, similar to Intranet functionality. Utilization of a
private cloud can be much more secure than that of a public cloud because of its limited
internal exposure: only the organization and designated stakeholders may have access to
operate on a specific private cloud.
1.2 PUBLIC CLOUD
Public cloud describes cloud computing in the traditional mainstream sense, whereby
resources are dynamically provisioned on a fine-grained, self-service basis over the Internet,
via web applications and web services, from an off-site third-party provider who shares
resources and bills on a fine-grained utility computing basis. It is typically based on a
pay-per-use model, similar to a prepaid electricity metering system, flexible enough to cater
for spikes in demand. Public clouds are considered less secure than the other cloud models
because they place an additional burden on users: ensuring that all applications and data
accessed on the public cloud are not subjected to malicious attacks.
1.3 HYBRID CLOUD
Hybrid cloud is a private cloud linked to one or more external cloud services, centrally
managed, provisioned as a single unit, and circumscribed by a secure network. It provides
virtual IT solutions through a mix of both public and private clouds. Hybrid Cloud provides
more secure control of the data and applications and allows various parties to access
information over the Internet. It also has an open architecture that allows interfaces with other
management systems. Hybrid cloud can describe a configuration combining a local device,
such as a plug computer, with cloud services. It can also describe configurations combining
virtual and physical colocated assets: for example, a mostly virtualized environment that
requires physical servers, routers, or other hardware such as a network appliance acting as a
firewall or spam filter.

1.4 CLOUD COMPUTING SERVICES


Cloud computing services are available across the entire computing spectrum. The basic
cloud service models are the following.
Software as a service (SaaS): SaaS delivers applications to millions of users through the
browser. For users, this can save costs on software and servers; for service providers,
maintaining a single application also saves space and cost. A SaaS provider typically hosts
and manages a given application in its own or a leased datacenter and makes it available to
multiple tenants and users over the Web.
Platform as a Service (PaaS): PaaS is an application development and deployment platform
provided as a service to developers over the Web. Developers use the provider's equipment to
build programs and deliver them to end users through the Internet from the provider's
servers. This greatly reduces the cost and complexity of developing and deploying
applications, since developers no longer need to buy or manage the underlying infrastructure.
PaaS provides all of the services essential to support the complete life cycle of building and
delivering web applications, entirely available from the Internet. The platform consists of
infrastructure software, a database, middleware, and development tools.
Infrastructure as a Service (IaaS): IaaS is the delivery of hardware (servers, storage, and
network) and associated software (operating systems, virtualization technology, and file
systems) as a service. It is an evolution of traditional hosting that allows users to provision
resources on demand without requiring any long-term commitment. Unlike PaaS offerings,
the IaaS provider does very little management other than keeping the data center operational;
end users must deploy and manage the software services themselves, just as they would in
their own data center.
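To make the IaaS model concrete, the sketch below shows how a user might provision a single virtual server programmatically. It uses the AWS EC2 API via the boto3 SDK; the region, machine image ID, and instance type are illustrative placeholders rather than values taken from this paper.

```python
# Hedged sketch: provisioning one IaaS virtual server through AWS EC2
# with the boto3 SDK. Region, AMI ID, and instance type are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image
    InstanceType="t2.micro",          # small pay-per-hour instance type
    MinCount=1,
    MaxCount=1,
)

instance = instances[0]
instance.wait_until_running()         # block until the VM is up
instance.reload()                     # refresh attributes (e.g. public IP)
print(instance.id, instance.public_ip_address)
```

No long-term commitment is involved: the same API can terminate the instance minutes later, which is exactly the on-demand property described above.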
Cloud technology is used by many people in their daily lives: using web-based email
services or preparing a document over the Internet are common examples. In the IT industry
there are three service models, infrastructure as a service (IaaS), platform as a service
(PaaS), and software as a service (SaaS), and all of them are used to deliver different kinds
of services. Cloud technology is very useful in business development, as it can bring
impressive results in a timely manner.
However, there is a fine line between success and failure in business. Selecting the right
technology can take a business to new heights, while a few mistakes can land it in trouble.
Every technology comes with its own pros and cons, and cloud computing, despite being a
core strength of some industries, likewise comes with its share of issues and can create
major problems under some rare circumstances. These issues and challenges of cloud
computing are sometimes characterized as ghosts in the cloud.

2. Literature Survey
Several studies have been carried out relating to security issues in cloud computing.
Gartner, for example, identified seven security issues that need to be addressed before
enterprises consider switching to the cloud computing model: (i) privileged user access -
information transmitted from the client through the Internet poses a certain degree of risk
because of issues of data ownership; enterprises should spend time getting to know their
providers and their regulations as much as possible, perhaps assigning some trivial
applications first to test the water; (ii) regulatory compliance - clients are accountable for
the security of their solution, and can choose between providers that allow themselves to be
audited by third-party organizations that check levels of security and providers that do not;
(iii) data location - depending on contracts, some clients might never know in what country
or jurisdiction their data is located; (iv) data segregation - encrypted information from
multiple companies may be stored on the same hard disk, so a mechanism to separate data
should be deployed by the provider; (v) recovery - every provider should have a disaster
recovery protocol to protect user data; (vi) investigative support - if a client suspects faulty
activity by the provider, it may not have many legal ways to pursue an investigation; (vii)
long-term viability - the ability to retract a contract and all data if the current provider is
bought out by another firm.
The Cloud Computing Use Case Discussion Group discusses the different use case scenarios
and related requirements that may exist in the cloud model, considering use cases from
different perspectives including customers, developers, and security engineers. ENISA
investigated the different security risks related to adopting cloud computing, along with the
affected assets, the risks' likelihood and impacts, and the vulnerabilities in cloud computing
that may lead to such risks. In 2009, Balachandra et al. discussed security SLA
specifications and objectives related to data location, segregation, and data recovery. In
2010, Bernd et al. discussed the security vulnerabilities existing in the cloud platform,
grouping the possible vulnerabilities into technology-related, cloud-characteristics-related,
and security-controls-related. Subashini et al. discuss the security challenges of the cloud
service delivery model, focusing on the SaaS model. Ragovind et al. discussed the
management of security in cloud computing, focusing on Gartner's list of cloud security
issues and the findings from the International Data Corporation enterprise survey. Morsy et
al. investigated cloud computing problems from the perspectives of cloud architecture,
offered characteristics, stakeholders, and service delivery models in 2010. A recent survey by
the Cloud Security Alliance (CSA) and IEEE indicates that enterprises across sectors are
eager to adopt cloud computing, but that stronger security assurances are needed both to
accelerate cloud adoption on a wide scale and to respond to regulatory drivers. It also notes
that cloud computing is shaping the future of IT, but the absence of a compliance
environment is having a dramatic impact on cloud computing's growth. Although several
studies have been carried out on security issues in cloud computing, this work presents a
detailed analysis of cloud computing security issues and challenges, focusing on the cloud
deployment types and the service delivery types.

3. Outcome of Literature Survey

The literature surveyed above points to four broad groups of open problems, which the
following sections analyze in turn: security issues, data issues, performance issues, and
design issues, along with several miscellaneous concerns.

4. Issues and Challenges

4.1 Security Issues
a) Availability of Service
Organizations are cautious about cloud computing and worry whether utility computing
services will offer sufficient availability. In this respect, existing SaaS products set a high
standard. Users expect high availability from cloud services, and the business-continuity
opportunity makes it attractive for large customers to transfer to cloud computing in critical
situations. While it is possible for different companies to provide independent software
stacks, it is very hard for a single organization to justify creating and maintaining more
than one stack in the name of software dependability. Another availability problem is
Distributed Denial of Service (DDoS) attacks: attackers use large botnets to cut into the
revenue of SaaS providers by making their services unavailable. A long botnet attack may be
difficult to sustain, since the longer an attack lasts the easier it is to uncover and defend
against, and the attacking bots cannot immediately be re-used for other attacks on the same
provider. Cloud computing shifts such attacks from the SaaS provider to the utility
computing provider, which can more readily absorb them and can offer DDoS protection as a
core competency.
b) Botnet hosting
Bot malware typically takes advantage of system vulnerabilities and software bugs or
hacker-installed backdoors that allow malicious code to be installed on machines without the
owner's consent or knowledge. Such malware then loads itself onto computers, often for nefarious
purposes. Machines infected with bot malware are then turned into zombies and can be used
as remote attack tools or to form part of a botnet under the control of the botnet controller.
Zombies are compromised machines waiting to be activated by their command and control
(C&C) servers. The C&C servers are often machines that have been compromised and
arranged in a distributed structure to limit traceability. Cybercriminals could potentially abuse
cloud services to operate C&C servers to carry out distributed denial-of-service (DDoS)
attacks, which are attacks from multiple sources targeting specific websites by flooding a web
server with repeated messages, tying up the system and denying access to legitimate users, as
well as other cyber criminal activities. In December 2009, for example, a new wave of a
Zeus bot (Zbot) variant was spotted taking advantage of Amazon EC2s cloud-based services
for its C&Cfunctionalities (Ferrer 2009: np).
c) Data security

Data security is another important research topic in cloud computing. Since service providers
usually do not have access to the physical security systems of data centers, they must rely on
the infrastructure provider to achieve full data security. Even in a virtual private cloud
environment, the service provider can only specify the security settings remotely, without
knowing whether they are fully implemented. In this process, the infrastructure provider
must meet the following objectives:
(1) confidentiality, for secure data transfer and access, and
(2) auditability, for verifying whether an application's security settings have been tampered
with.
Confidentiality is usually achieved using cryptographic protocols; data should therefore be
encrypted before being placed into the cloud, since unencrypted data in a local data center is
not necessarily more secure than encrypted data in the cloud. Auditability can be achieved
using remote attestation techniques and can be implemented as an additional layer outside
the virtualized guest operating system, with a single logical layer maintaining the software
responsible for confidentiality and auditability. Remote attestation typically requires a
trusted platform module (TPM) to generate a non-forgeable system summary as proof of
system security. In a virtual environment, however, a virtual machine can dynamically
migrate from one location to another, which makes it very difficult to build a trust
mechanism into every architectural layer of the cloud; VM migration should therefore occur
only if both the source and destination servers are trusted.
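As a minimal illustration of the confidentiality objective, the sketch below encrypts data on the client side before it is uploaded, so the provider only ever stores ciphertext. It uses the Fernet recipe from the Python cryptography package; key storage and rotation are deliberately out of scope.

```python
# Minimal sketch: client-side encryption before placing data in the cloud.
# Fernet (from the 'cryptography' package) combines AES-CBC with an HMAC,
# so the provider stores only an authenticated ciphertext token.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # must be kept outside the cloud provider
cipher = Fernet(key)

record = b"customer-id=42;balance=1000"
token = cipher.encrypt(record)   # this token is what gets uploaded

# ... later, after fetching the token back from the cloud ...
assert cipher.decrypt(token) == record
```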
d) Launch pad for brute force and other attacks
There have also been suggestions that the virtualized infrastructure can be used as a
launching pad for new attacks. A security consultant recently suggested that it may be
possible to abuse cloud computing services to launch a brute force attack (a strategy used to
break encrypted data by trying all possible decryption key or password combinations) on
various types of passwords. Using Amazon EC2 as an example, the consultant estimated that
based on the hourly fees Amazon charges for its EC2 web service, it would cost more than
$1.5m to brute force a 12-character password containing nothing more than lower-case letters
a through z; an 11-character code costs less than $60,000 to crack, and a 10-letter phrase
costs less than $2,300 (Goodin 2009: np). Although it is still relatively expensive to perform
brute force online password guessing attacks (also known as online dictionary attacks), this
could have broad implications for systems using password-based authentication, and it may
not take long for attackers to design a more practical and cheaper attack of this kind.
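The cited figures follow from a simple linear model in which cost scales with keyspace size. The sketch below reproduces them, taking the $1.5m figure for a 12-character lowercase password from the text and treating the implied cost per guess as an assumption.

```python
# Reproducing the quoted brute-force cost estimates under a linear model:
# cost is proportional to keyspace size. The $1.5m anchor figure comes
# from the text (Goodin 2009); everything derived from it is an estimate.
ALPHABET = 26                        # lowercase letters a-z only
COST_12_CHARS = 1_500_000            # USD, figure quoted above

cost_per_guess = COST_12_CHARS / ALPHABET**12

for length in (12, 11, 10):
    print(f"{length} chars: ~${cost_per_guess * ALPHABET**length:,.0f}")
# -> 12 chars: ~$1,500,000; 11 chars: ~$57,692; 10 chars: ~$2,219
```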
e) Rogue clouds
Just like entrepreneurs, cyber criminals and organised crime groups are always on the lookout
for new markets and with the rise of cloud computing, a new sector for exploitation now
exists. Rogue cloud service providers based in jurisdictions with lax cybercrime legislation
can provide confidential hosting and data storage services for a usually steep fee. Such
services could potentially be abused by organised crime groups to store and distribute
criminal data (eg child abuse materials for commercial purposes) to avoid the scrutiny of law
enforcement agencies. Hosting confidential business data with cloud service providers

involves the transfer of a considerable amount of management control to cloud service


providers that usually results in diminished control over security arrangements. There is the
risk of rogue providers mining the data for secondary uses such as marketing and reselling
the mined data to other businesses. A June 2009 email survey of 220 decision-makers in US
organisations with more than 1,000 employees highlighted similar concerns. In the survey,
40.5 percent of the respondents agreed or strongly agreed that 'the trend toward using SaaS
and cloud computing solutions in the enterprise seriously increases the risk of data leakage'
(Proofpoint 2009: 24). Unfortunately, clients (especially SMEs) are often less aware of the
risks and may not have an easy way of determining whether a particular cloud service
provider is trustworthy. Tim Watson, head of the computer forensics and security group at De
Montfort University, remarked that one provider may offer a wonderfully secure service and
another may not; if the latter charges half the price, the majority of organisations will opt for
it, as they have no real way of telling the difference (Everett 2009: 7).
f) Espionage risks
There is increasing pressure on nation-states to develop cyber-offensive capabilities. The next
wave of cyber-security threats could potentially be targeted attacks aimed at specific
government agencies and organisations, or individuals within enterprises including cloud
service providers. For example, Google and several Gmail accounts belonging to Chinese and
Tibetan activists have reportedly been targeted (Google 2010; Helft & Markoff 2010).
Foreign intelligence services and industrial spies may not disrupt the normal functioning of
an information system as they are mainly interested in obtaining information relevant to vital
national or corporate interests. They do so through clandestine entry into computer systems
and networks as part of their information-gathering activities. Cloud service providers may be
compelled to scan or search data of interest to national security and to report on, or monitor,
particular types of transactional data as these data may be subject to the laws of the
jurisdiction in which the physical machine is located (Gellman 2009). In addition, overseas
cloud service providers may not be legally obliged to notify the clients (owners of the data)
about such requests.
g) Data availability (business continuity)
A major risk to business continuity in the cloud computing environment is loss of internet
connectivity (that could occur in a range of circumstances such as natural disasters) as
businesses depend on Internet access to reach their corporate information. In addition, if a
vulnerability is identified in a particular service provided by the cloud service provider, the
business may have to suspend all access to the cloud service provider until it can be
assured that the vulnerability has been rectified. There are also concerns that the seizure of a
data-hosting server by law enforcement agencies may result in the unnecessary interruption
or cessation of unrelated services whose data is stored on the same physical machine. In a
recent example, FBI agents seized computers from a data center at 2323 Bryan Street in
Dallas, Texas, attempting to gather evidence in an ongoing investigation of two men and their
various companies accused of defrauding AT&T and Verizon for more than US$6 million

(Lemos 2009: np). This resulted in the unintended consequence of disrupting the continuity
of businesses whose data and information were hosted on the seized hardware. For
LiquidMotors, a company that provides inventory management to car dealers, the seized
servers held its client data and hosted its managed inventory services. The FBI seizure of the
servers in the data center rack effectively shut down the company, which filed a lawsuit
against the FBI the same day to get the data back (Lemos 2009: np). While the above
example may be an isolated case, it raised concerns about unauthorised access to seized data
not related to the warrant, which can result in the unintended disclosure of data to unwanted
parties, particularly in authoritarian countries. There have also been a number of reported
incidents of cloud services being taken offline due to DDoS attacks (see Metz 2009).
Although DDoS attacks already existed, the cloud computing environment is a new attack
vector that may have a more widespread impact on Internet users. The security measures
adopted by different cloud service providers vary. If a cybercriminal can identify the
provider whose vulnerabilities are the easiest to exploit, then this entity becomes a highly
visible target. The lack of security associated with this single entity threatens the entire cloud
in which it resides (Kaufman 2009: 63).
h) Attacks targeting the shared-tenancy environment
A virtual machine (VM) is a software implementation of a computer that runs its own
operating system and applications as if it were a physical machine (VMWare 2009). Multiple
VMs can concurrently run different software applications on different operating system
environments on a single physical machine. This reduces hardware costs and space
requirements. In a shared-tenancy cloud computing environment, data from different clients
can be hosted on separate VMs but reside on a single physical machine. This provides
maximum flexibility. Software applications running in one VM should not be able to impact
or influence software running in another VM. An individual VM should be unaware of the
other VMs running in the environment as all actions are confined to its own address space. In
a recent study, a team of computer scientists from the University of California, San Diego
and Massachusetts Institute of Technology examined the widely-used Amazon EC2 services.
They found that it is possible to map the internal cloud infrastructure, identify where a
particular target VM is likely to reside, and then instantiate new VMs until one is placed
co-resident with the target (Ristenpart et al. 2009: 199). This demonstrated that the research
team were able to load their eavesdropping software onto the same servers hosting targeted
websites (Hardesty 2009). By identifying the target VMs, attackers can potentially monitor
the cache (a small allotment of high-speed memory used to store frequently-used
information) in order to steal data hosted on the same physical machine (Hardesty 2009).
Such an attack is also known as a side-channel attack. The
findings from this research may only be a proof-of-concept at this stage, but it raises concerns
about the possibility of cloud computing servers being a central point of vulnerability that can
be criminally exploited. The Cloud Security Alliance, for example, listed this as one of the
top threats to cloud computing. Attacks have surfaced in recent years that target the shared
technology inside Cloud Computing environments. Disk partitions, CPU caches, GPUs, and

other shared elements were never designed for strong compartmentalization. As a result,
attackers focus on how to impact the operations of other cloud customers, and how to gain
unauthorized access to data. (Cloud Security Alliance 2010: 11)
i) VM-based malware
Vulnerabilities in VMs can be exploited by malicious code (malware) such as VM-based
rootkits designed to infect both client and server machines in cloud services. Rootkits are
cloaking technologies usually employed by other malware programs to abuse compromised
systems by hiding files, registry keys and other operating system objects from diagnostic,
antivirus and security programs. For example, in April 2009, a security researcher pointed out
how a critical vulnerability in VMware's VM display function could be exploited to run
malware, allowing an attacker to read and write memory on the host operating system
(Keizer 2009: np). VM-based rootkits, as pointed out by Price (2008: 27), could be used by
attackers to gain complete control of the underlying OS without the compromised OS being
aware of their existence; they are especially dangerous because they also control all hardware
interfaces. Once VM-based rootkits are installed on a machine, they can view
keystrokes, network packets, disk state, and memory state, while the compromised OS
remains oblivious.

4.2 Data Issues


a) Data Lock-In

Software stacks have improved interoperability between platforms, but customers still find it
difficult to extract their programs and data from one site to run on another, and some
organizations avoid cloud computing because of this concern. Customer lock-in may seem
attractive to cloud computing providers, but cloud users are vulnerable to price increases,
reliability problems, or even providers going out of business. SaaS developers could deploy
services and data across multiple cloud computing providers so that the failure of a single
company does not strand customer data; the fear is that such multi-provider deployment
would flatten profits. Two observations relieve this concern. First, quality matters as well as
price, so customers will not necessarily flock to the lowest-cost service. Second, regarding
the data lock-in concern, standardization of APIs would enable a new usage model in which
the same software infrastructure is used in both a private cloud and a public cloud. This
option enables 'surge computing', in which extra tasks that cannot easily be run in the
private cloud because of temporarily high workloads are run in the public cloud.
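One way to act on the multi-provider suggestion above is to route all reads and writes through a provider-neutral interface and mirror data to more than one cloud. The sketch below uses in-memory stand-ins for two hypothetical provider SDKs to keep the example self-contained.

```python
# Sketch of lock-in mitigation: a provider-neutral store that mirrors
# writes to two independent clouds. InMemoryStore is a hypothetical
# stand-in for a real vendor SDK.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):        # stand-in for one vendor's SDK
    def __init__(self):
        self._data = {}
    def put(self, key, data):
        self._data[key] = data
    def get(self, key):
        return self._data[key]

class MirroredStore(ObjectStore):
    """Writes go to every backend; reads fall back if one provider fails."""
    def __init__(self, *backends):
        self.backends = backends
    def put(self, key, data):
        for backend in self.backends:
            backend.put(key, data)
    def get(self, key):
        for backend in self.backends:
            try:
                return backend.get(key)
            except KeyError:
                continue
        raise KeyError(key)

store = MirroredStore(InMemoryStore(), InMemoryStore())  # two "providers"
store.put("report.pdf", b"...")
assert store.get("report.pdf") == b"..."
```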
b) Data Transfer Bottlenecks

Applications continue to become more data-intensive, and moving applications across the
boundaries of clouds complicates data placement and transport. Cloud providers and users
have to think about minimizing costs, considering traffic and the implications of placement
at each level of the system. One opportunity for overcoming the high cost of bandwidth is
simply to ship disks. Another opportunity is to keep data in the cloud: if the data is already
in the cloud, transferring it may no longer be a bottleneck when enabling a new service. For
example, with archival data already stored in the cloud, it becomes possible to sell cloud
computing cycles for new services, such as creating searchable indices of all your archival
data, or performing image recognition on all your archived photos to group the images by
who appears in each one. A third opportunity is that the cost of network bandwidth may fall
more quickly: one estimate is that two-thirds of the cost of WAN bandwidth is the cost of
the high-end routers, whereas only one-third is fiber cost, so using centrally controlled
routers instead of high-end distributed routers could, if deployed in the WAN, make WAN
costs drop more quickly.
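A back-of-the-envelope calculation shows why shipping disks can beat the network; the link speed and courier delay below are illustrative assumptions, not figures from the text.

```python
# When does overnight shipping beat a WAN transfer? Illustrative numbers:
# a 10 TB dataset over an assumed 100 Mbps sustained link vs a 24 h courier.
TERABYTES = 10
LINK_MBPS = 100          # assumed sustained WAN throughput
SHIPPING_HOURS = 24      # assumed overnight courier time

bits = TERABYTES * 8e12
transfer_hours = bits / (LINK_MBPS * 1e6) / 3600
print(f"network: {transfer_hours:.0f} h, shipping: {SHIPPING_HOURS} h")
# -> network: 222 h, so shipping the disks is roughly 9x faster here
```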
c) Traffic management and analysis
Analysis of data traffic is important for today's data centers. However, several challenges
arise when existing traffic measurement and analysis methods from Internet Service Provider
(ISP) and enterprise networks are extended to data centers. First, the density of links is much
higher than in ISP or enterprise networks, which is a worst-case scenario for existing
methods. Second, most existing methods can compute traffic matrices between a few hundred
end hosts, but even a modular data center can have several thousand servers. Finally, existing
methods usually assume flow patterns that are reasonable in Internet and enterprise
networks, but the applications deployed on data centers significantly change the traffic
pattern, and applications are more tightly coupled to network, computing, and storage
resources than in other settings. Currently, there is very little work on measurement and
analysis of data center traffic. Greenberg et al. report data center traffic characteristics on
flow sizes and concurrent flows, and use these to guide network infrastructure design.
Benson et al. perform a complementary study of traffic at the edges of a data center by
examining SNMP traces from routers.
d) Reputation Fate Sharing
Reputations do not virtualize well: one customer's bad behavior can affect the reputation of
the cloud as a whole. An opportunity would be to create reputation-guarding services similar
to the trusted email services currently offered, for a fee, to services hosted on smaller ISPs.
Another important legal issue for cloud computing providers is the transfer of legal liability:
providers would prefer that liability remain with the customer.

4.3 Performance Issues


a) Virtual machine migration
Virtual machine migration provides a major benefit in cloud computing by enabling load
balancing across data centers, making data centers more robust and responsive. VM
migration evolved from process migration. Its major benefit is the ability to avoid hotspots,
although this is not straightforward: current techniques lack the agility to detect workload
hotspots and initiate a migration in response to unexpected workload changes. Moreover,
during migration the in-memory state must be transferred consistently and efficiently, with
integrated consideration of the resources used by applications and physical servers, so that
applications remain consistent throughout the transfer.
b) Server consolidation
In a cloud computing environment, server consolidation is an effective approach to
maximizing resource utilization while minimizing energy consumption. Live VM migration
technology is used to consolidate VMs residing on multiple under-utilized servers onto a
single server, so that the remaining servers can be set to an energy-saving state. However,
server consolidation should not hurt application performance. It is known that the resource
usage of individual VMs may vary over time, and a change in the footprint of a VM on a
server can result in resource congestion; sharing of resources (such as disk I/O, bandwidth,
and memory cache) among VMs on a server can likewise lead to congestion. Information
about the fluctuations of VM footprints is therefore useful for effective server consolidation,
and the system must react quickly when resource congestion occurs.
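At its core, consolidation is a bin-packing problem: pack VM loads onto as few servers as possible and power the rest down. The sketch below uses first-fit decreasing on a single normalized CPU dimension; a real consolidator would also track memory, I/O, and the footprint fluctuations discussed above.

```python
# Sketch: server consolidation as first-fit-decreasing bin packing over a
# single normalized CPU dimension (capacity 1.0 per server). Servers not
# assigned any VM can be switched to an energy-saving state.
def consolidate(vm_loads, capacity=1.0):
    residual = []                    # free capacity per active server
    placement = {}
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(residual):
            if load <= free:         # first active server that fits
                residual[i] -= load
                placement[vm] = i
                break
        else:                        # no fit: power on a new server
            residual.append(capacity - load)
            placement[vm] = len(residual) - 1
    return placement, len(residual)

placement, active = consolidate({"vm1": 0.6, "vm2": 0.3,
                                 "vm3": 0.3, "vm4": 0.5})
print(placement, f"-> {active} active servers")   # 2 servers, not 4
```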
c) Performance Unpredictability
Multiple virtual machines can share CPUs and main memory fairly easily, but sharing I/O is
more complex in cloud computing. One solution is to improve operating systems and
architectures to efficiently virtualize interrupts and I/O channels; however, some technologies
that are critical to the cloud, such as PCI Express, are difficult to virtualize. Flash memory
could be an attractive alternative with reduced I/O interference: it can allow multiple virtual
machines with conflicting random I/O workloads to coexist on a single physical machine
without interference, while providing many I/O operations per second and large storage
capacity. Another unpredictability problem concerns the scheduling of virtual machines for
various classes of batch processing programs, particularly for high-performance computing
(HPC). The obstacle to attracting HPC is not the use of clusters, since the majority of
parallel computing today is done in large clusters using the message-passing interface (MPI).
The problem is that many HPC applications need to ensure that all the threads of a program
are running simultaneously, and today's virtual machines and operating systems do not
provide a programmer-visible way to ensure this. Gang scheduling can be used to overcome
this obstacle in cloud computing.
d) Scalable Storage
Important properties of cloud computing are infinite capacity on demand, no up-front cost,
and short-term usage. Creating a storage system that not only meets these expectations but
also combines them with the cloud advantages of scaling arbitrarily up and down on
demand, while meeting programmer expectations about resource organization, high
availability, data durability, and scalability, remains an open research problem.
e) Bugs in Large-Scale Distributed Systems
Another challenging issue in cloud computing is removing errors in these large-scale
distributed systems. Such bugs often cannot be reproduced in smaller configurations, so
debugging must be done at scale in the production data centers. One opportunity may be to
rely on virtual machines. Many SaaS providers developed their infrastructure without VMs,
either because they preceded the recent popularity of VMs or because they felt they could
not afford the performance cost of VMs. Since VMs are central to utility computing, that
level of virtualization may make it possible to capture valuable debugging information in
ways that are unlikely without VMs.
f) Scaling Quickly
Pay-as-you-go certainly applies to network bandwidth and storage, both of which are charged
by bytes used. Computation is slightly different, depending on the virtualization level.
Google AppEngine automatically scales up and down in response to load, and users are
charged by the cycles used; AWS charges by the hour for the number of instances you
occupy, whether your machine is idle or not. One opportunity is to automatically scale
quickly up and down in response to load, without violating service level agreements, in
order to save money; another reason for scaling is to conserve resources as well as money.
Since an idle computer uses about two-thirds of the power of a busy computer, careful use of
resources could reduce the environmental impact of datacenters, which is currently receiving
a great deal of negative attention. Cloud computing providers already perform careful,
low-overhead accounting of resource consumption; by imposing per-hour and per-byte costs,
utility computing encourages programmers to pay attention to efficiency and allows more
direct measurement of operational and development inefficiencies.
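The scale-quickly idea reduces to a control loop over observed utilization. The thresholds below are illustrative assumptions, and a production autoscaler would also account for SLA headroom and instance start-up delays.

```python
# Sketch of an autoscaling control loop: add an instance when utilization
# threatens the SLA, release one when capacity sits idle. Thresholds are
# illustrative assumptions.
def autoscale(instances, utilization, lo=0.3, hi=0.8, min_instances=1):
    """Return the instance count for the next control interval."""
    if utilization > hi:                    # risk of SLA violation
        return instances + 1                # scale out
    if utilization < lo and instances > min_instances:
        return instances - 1                # scale in: stop paying for idle
    return instances

count = 4
for load in (0.9, 0.85, 0.5, 0.2, 0.1):    # observed utilization samples
    count = autoscale(count, load)
    print(f"load={load:.2f} -> {count} instances")
```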
g) Latency
Latency remains an open research issue on the Internet, and performance in the cloud
ultimately means the performance of the result as seen by the client. Latency in a cloud
need not be prohibitive: with smartly written applications and an intelligently planned
infrastructure, it is possible to understand how and where requests are running and to keep
latency in check. As cloud capacity and cloud-based applications grow rapidly, latency
pressures will also increase, so cloud latency must keep improving if the cloud is to compete
with the desktop PC, where memory and storage are the largest bottlenecks.
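Latency is straightforward to quantify from the client side. The sketch below times repeated small requests against a placeholder endpoint and reports the median and 95th percentile, the figures that matter for interactive use (statistics.quantiles requires Python 3.8+).

```python
# Sketch: measuring client-perceived cloud latency. The URL is a
# placeholder endpoint, not a real service from this paper.
import statistics
import time
import urllib.request

URL, SAMPLES = "https://example.com/api/ping", 50   # hypothetical endpoint

latencies_ms = []
for _ in range(SAMPLES):
    t0 = time.perf_counter()
    urllib.request.urlopen(URL).read()
    latencies_ms.append((time.perf_counter() - t0) * 1000)

print(f"median: {statistics.median(latencies_ms):.1f} ms")
print(f"p95:    {statistics.quantiles(latencies_ms, n=20)[-1]:.1f} ms")
```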

4.4 Design Issues


a) Energy management
Improving energy efficiency is another major issue in cloud computing. It has been estimated
that the cost of powering and cooling accounts for 53% of the total operational expenditure
of data centers, and infrastructure providers are under enormous pressure to reduce energy
consumption; the aim is not only to reduce the energy cost of data centers. Designing
energy-efficient data centers has recently received considerable attention, and this goal can
be pursued in several directions. For example, energy-efficient hardware architectures that
enable slowing down CPU speeds and turning off partial hardware components have become
commonplace. Energy-aware job scheduling and server consolidation are two other ways to
reduce power consumption by turning off unused machines, and there is also current research
on energy-efficient network protocols and infrastructures. A key challenge in all these areas
is to achieve a good trade-off between energy savings and application performance, and some
researchers have recently begun to work on solutions for performance and power
management in a dynamic cloud environment.
b) Software frameworks
Cloud computing provides a compelling platform for hosting large data-intensive
applications. Such applications typically leverage the MapReduce framework concept,
implemented by systems such as Hadoop, for scalable and fault-tolerant data processing. The
performance and resource consumption of a MapReduce job are highly dependent on the
type of application, and Hadoop nodes may have heterogeneous characteristics when they are
allocated as virtual machines. Hence, it is possible to optimize the performance and cost of a
MapReduce application by carefully selecting its configuration parameter values and
designing more efficient scheduling algorithms; by mitigating bottleneck resources, the
execution time of applications can be significantly improved. The design challenges here
include performance modeling of Hadoop jobs in all the possible cases, and adaptive
scheduling under dynamic conditions. Another approach is to make the MapReduce
framework energy-aware, turning a Hadoop node into sleep mode after it has finished its
work while waiting for new assignments; to do so, Hadoop and HDFS themselves must be
made energy-aware, and researchers are still working on the trade-off between performance
and energy-awareness.
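The MapReduce pattern that Hadoop distributes across nodes can be shown in a few lines: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. The single-process word count below is only a sketch of the model, not of Hadoop itself.

```python
# Sketch of the MapReduce model (word count): map emits (word, 1) pairs,
# the shuffle groups pairs by key, reduce sums each group. Hadoop runs
# this same pattern across many nodes with fault tolerance.
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield word.lower(), 1            # emit (key, value)

def reduce_phase(word, counts):
    return word, sum(counts)             # aggregate one key's values

documents = ["the cloud", "the data center", "cloud computing"]

groups = defaultdict(list)               # shuffle: group values by key
for doc in documents:
    for word, count in map_phase(doc):
        groups[word].append(count)

print(dict(reduce_phase(w, c) for w, c in groups.items()))
# -> {'the': 2, 'cloud': 2, 'data': 1, 'center': 1, 'computing': 1}
```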
c) Novel cloud architectures
Commercial clouds are currently implemented as large data centers operated in a centralized
fashion. This design achieves high manageability and economy of scale, but constructing
large data centers has limitations such as high initial investment and high energy expense.
Some researchers suggest that small data centers can be more advantageous than big data
centers in many cases: a small data center consumes less power, so it does not require a
powerful and expensive cooling system, and small data centers are comparatively cheaper to
build and easier to distribute geographically than large data centers. Geographic diversity
suits time-critical services such as content delivery and interactive gaming; for example,
Valancius et al. studied the feasibility of hosting video-streaming services using application
gateways. Another related line of research uses voluntary resources (i.e., resources donated
by users) for hosting cloud applications. Clouds built from a mixture of voluntary and
dedicated resources are much cheaper to operate and are more suitable for non-profit
applications such as scientific computing. This architecture introduces design challenges
such as handling frequent churn events and managing heterogeneous resources.

d) Software Licensing
Existing commercial software licenses usually restrict the computers on which the software
can run: users pay up front for the software and then pay an annual maintenance fee. This
existing commercial licensing approach is a poor fit for cloud computing applications, and
many cloud computing providers therefore rely on open source software. The primary
opportunity for commercial software companies is to change their licensing structure to
better fit cloud computing, and the challenge is for software companies to adapt their sales
and support models to selling products into cloud computing. Pay-as-you-go may not fit the
purchase-based cost analysis traditionally used to evaluate software usefulness, and one
solution to this challenge is for cloud providers to offer discounted plans for bulk use.
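The mismatch between licensing models comes down to simple break-even arithmetic; all prices in the sketch below are illustrative assumptions rather than real vendor figures.

```python
# Sketch: perpetual license (amortized over 3 years plus maintenance)
# vs pay-as-you-go licensing. All prices are illustrative assumptions.
LICENSE = 10_000        # one-time purchase, USD
MAINTENANCE = 2_000     # annual maintenance fee, USD
HOURLY = 1.50           # assumed pay-as-you-go rate, USD per hour

perpetual_per_year = LICENSE / 3 + MAINTENANCE

for hours_per_year in (500, 2000, 8760):
    payg = HOURLY * hours_per_year
    winner = "pay-as-you-go" if payg < perpetual_per_year else "perpetual"
    print(f"{hours_per_year:>5} h/yr: ${payg:>9,.0f} vs "
          f"${perpetual_per_year:,.0f} -> {winner}")
```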
e) Client incomprehension
We are probably past the days when people thought clouds were just big clusters of servers,
but that does not mean we are free of misunderstanding, even as the cloud moves forward.
There are many misconceptions about how private and public clouds work together, how
virtualization and cloud computing overlap, how to move from one kind of infrastructure to
another, and so on. A good way to clear these up is to present users and customers with real
examples of what is possible and why: this grounds their understanding in actual work that
has been done, rather than in hypotheticals where they are left to fill in the blanks
themselves.
f) Ad-hoc standards as the only real standards
Amazon EC2 is the biggest example of this. As convenient as it is to develop for the cloud
using EC2, one of the most common types of deployment, it is also something to be cautious
about. On the positive side, ad-hoc standards bootstrap adoption: look how quickly a whole
culture of cloud computing has sprung up around EC2. On the negative side, they leave
much less space for innovators to create something open, to let things break away from the
ad-hoc standards and be adopted on their own.

4.5 Miscellaneous Issues


a) Costing Model:
Cloud consumers must consider the tradeoffs among computation, communication, and
integration. While migrating to the cloud can significantly reduce infrastructure cost, it
raises the cost of data communication (the cost of transferring an organization's data to and
from the public or community cloud), and the cost per unit of computing resource used is
likely to be higher. This problem is particularly prominent if the consumer uses the hybrid
cloud deployment model, where the organization's data is distributed among a number of
public, private (in-house IT infrastructure), and community clouds. Intuitively, on-demand
computing makes sense only for CPU-intensive jobs.
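The computation-versus-communication tradeoff can be written down directly; the unit prices in the sketch are illustrative assumptions, and the two calls contrast a CPU-bound job with a data-bound one.

```python
# Sketch of the costing tradeoff: total cost = compute + data transfer.
# Unit prices are illustrative assumptions, not real provider rates.
PRICE_CPU_HOUR = 0.10    # assumed USD per instance-hour
PRICE_GB_OUT = 0.09      # assumed USD per GB moved in/out

def cloud_cost(cpu_hours, gb_moved):
    return cpu_hours * PRICE_CPU_HOUR + gb_moved * PRICE_GB_OUT

print(cloud_cost(cpu_hours=1000, gb_moved=10))   # CPU-bound:  $100.90
print(cloud_cost(cpu_hours=10, gb_moved=5000))   # data-bound: $451.00
```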
b) Charging Model:

The elastic resource pool has made cost analysis much more complicated than for regular
data centers, which often calculate their costs based on consumption of static computing
resources. Moreover, an instantiated virtual machine, rather than the underlying physical
server, has become the unit of cost analysis. For SaaS cloud providers, the cost of developing
multi-tenancy within their offering can be very substantial, including redesign and
redevelopment of software that was originally built for single tenancy, the cost of providing
new features that allow for intensive customization, performance and security enhancements
for concurrent user access, and dealing with the complexities induced by these changes.
Therefore, a strategic and viable charging model is crucial for the profitability and
sustainability of SaaS cloud providers.
c) Service Level Agreement (SLA):
Although cloud consumers do not have control over the underlying computing resources,
they do need to ensure the quality, availability, reliability, and performance of these resources
when consumers have migrated their core business functions onto their entrusted cloud.
Typically, these are provided through Service Level Agreements (SLAs) negotiated between
the providers and consumers. The very first issue is defining SLA specifications at an
appropriate level of granularity, namely the tradeoff between expressiveness and complexity,
so that they can cover most consumer expectations while remaining relatively simple to
weight, verify, evaluate, and enforce by the resource allocation mechanism on the cloud. In
addition, different cloud offerings (IaaS, PaaS, and SaaS) will need to define different SLA
meta-specifications, which also raises a number of implementation problems for cloud
providers. Furthermore, advanced SLA mechanisms need to constantly incorporate user
feedback and customization features into the SLA evaluation framework.
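To see what "simple to weight, verify, evaluate, and enforce" can look like, the sketch below checks one measurable SLA clause, monthly availability, against a target. The 99.9% target and the credit tiers are illustrative assumptions, not any real provider's terms.

```python
# Sketch: evaluating one measurable SLA clause (monthly availability).
# The 99.9% target and credit tiers are illustrative assumptions.
MINUTES_PER_MONTH = 30 * 24 * 60

def availability(downtime_minutes):
    return 100 * (1 - downtime_minutes / MINUTES_PER_MONTH)

def service_credit(percent_up, target=99.9):
    if percent_up >= target:
        return 0                              # SLA met, no credit owed
    return 25 if percent_up < 99.0 else 10    # assumed credit tiers (%)

up = availability(downtime_minutes=90)        # 90 minutes of outage
print(f"availability {up:.3f}% -> credit {service_credit(up)}%")
# -> availability 99.792% -> credit 10%
```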
d) Cloud Interoperability Issue:
Currently, each cloud offering has its own way of dictating how cloud clients, applications,
and users interact with the cloud, leading to the "Hazy Cloud" phenomenon. This severely
hinders the development of cloud ecosystems by forcing vendor lock-in, which prevents users
from choosing among alternative vendors or offerings simultaneously in order to optimize
resources at different levels within an organization. More importantly, proprietary cloud
APIs make it very difficult to integrate cloud services with an organization's own existing
legacy systems (e.g. an on-premise data centre for highly interactive modeling applications
in a pharmaceutical company). The primary goal of interoperability is to realize the seamless
flow of data across clouds and between the cloud and local applications. Interoperability is
essential for cloud computing at a number of levels. First, to optimize IT assets and
computing resources, an organization often needs to keep in-house the IT assets and
capabilities associated with its core competencies while outsourcing marginal functions and
activities (e.g. the human resource system) to the cloud. Second, more often than not, for the
purpose of optimization, an organization may need to outsource a number of marginal
functions to cloud services offered by different vendors. Standardization appears to be a
good solution to address the interoperability issue; however, as cloud computing is just
starting to take off, the interoperability problem has not yet appeared on the pressing agenda
of major industry cloud vendors.

e) Data security concerns

When we talk about the security of cloud technology, a lot of questions remain unanswered.
Serious threats such as virus attacks and the hacking of a client's site are among the biggest
cloud computing data security issues, and entrepreneurs have to think about them before
adopting cloud computing technology for their business. Since you are transferring your
company's important details to a third party, it is important to assure yourself about the
manageability and security systems of the cloud.
f) Selecting the perfect cloud setup

Choosing the appropriate cloud mechanism as per the needs of your business is essential.
There are three types of cloud configuration: public, private, and hybrid. The main secret
behind a successful implementation of the cloud is picking the right one; selecting the
wrong cloud can expose the business to serious hazards. Companies with vast amounts of
data tend to prefer private clouds, small organisations usually use public clouds, and a few
companies like to go for the balanced approach of hybrid clouds. Choose a cloud computing
consulting service which clearly discloses the terms and conditions regarding cloud
implementation and data security.

g) Real-time monitoring requirements

Some agencies are required to monitor their systems in real time; it is compulsory for their
business that they continuously monitor and maintain, for example, their inventory systems.
Banks and some government agencies need to update their systems in real time, but cloud
service providers are often unable to match this requirement. This is a big challenge for
cloud service providers.

h) Resolving the stress

Every organisation wants proper control of and access to its data, and it is not easy to hand
over precious data to a third party. The main tension between enterprises and executives is
the desire to retain control over new modes of operation while using the technology. These
tensions are not unsolvable, but they do suggest that providers and clients alike must
deliberately address a suite of cloud challenges when planning, contracting, and managing
the services.

i) Reliability of new technology

It is human nature that we trust things present in front of our eyes. Entrepreneurs normally
hesitate to let organisational information out to an unknown service provider: they feel that
information stored on their own premises is more secure and more easily accessible, and
they fear losing control over data that is taken from them and handed over to an unknown
third party. Perceived security threats increase because they do not know where the
information is stored and processed. These fears of unknown service providers must be dealt
with amicably and eliminated from their minds.
j) Dependency on service providers

For uninterrupted service and proper operation, it is necessary to engage a vendor with
proper infrastructure and technical expertise: an authorised vendor who can meet the security
standards set by your company's internal policies and by government agencies. While
selecting the service provider, you must carefully read the service level agreement and
understand its policies and terms, including provisions for compensation in case of any
outage, and any lock-in clauses.
k) Cultural obstacles

Top management and organisational culture can also become big obstacles to the proper
implementation of cloud computing. Senior management is often unwilling to store the
company's important data somewhere it cannot control and access, under the misconception
that cloud computing puts the organisation at risk by leaking important details. This mindset
places the organisation on a risk-averse footing, which makes it more reluctant to migrate to
a cloud solution.
l) Cost barrier

For the efficient working of cloud computing, you have to bear high bandwidth charges:
businesses can cut costs on hardware but must spend a large amount on bandwidth. For
smaller applications, cost is not a big issue, but for large and complex applications it is a
major concern, since transferring complex and data-intensive workloads over the network
requires sufficient bandwidth. This is a major obstacle for small organisations and restricts
them from implementing cloud technology in their business.
m) Lack of knowledge and expertise

Not every organisation has sufficient knowledge about implementing cloud solutions; many
lack expert staff and tools for the proper use of cloud technology, and delivering information
and selecting the right cloud is difficult without the right direction. Teaching staff about the
processes and tools of cloud computing is a big challenge in itself, and requiring an
organisation to shift its business to cloud-based technology without proper knowledge is
asking for disaster; such organisations would never use the technology effectively for their
business functions.
n) Consumption-based service charges

Cloud computing services are on-demand services, so it is difficult to define a specific cost
for a particular quantity of services; these fluctuations and price differences make
implementing cloud computing difficult and complicated. It is not easy for a normal business
owner to anticipate consistent demand and the fluctuations that come with seasons and
various events, so it is hard to budget for a service that could consume several months of
budget in a few days of heavy use.
o) Alleviating threat risks

It is very complicated to certify that a cloud service provider meets the standards for
security and threat risk, and not every organisation has sufficient mechanisms to mitigate
these types of threats, so organisations should observe and examine threats very seriously.
There are mainly two types of threat: internal threats from within the organisation, and
external threats from professional hackers who seek out the important information of your
business. These threats and security risks put a check on implementing cloud solutions.
p) Unauthorised service providers

Cloud computing is a new concept for most business organisations, and a normal
businessman is not able to verify the genuineness of a service provider agency. It is very
difficult for businesses to check whether a vendor meets the security standards, as they
often have no ICT consultant to evaluate vendors against worldwide criteria. It is necessary
to verify that the vendor has been operating the business for a sufficient time without any
negative record, continuing without any data-loss complaints and with a number of satisfied
clients, and that its market reputation is unblemished.
q) Hacking of brand

Cloud computing carries some major risk factors, such as hacking. Some professional
hackers are able to break through efficient firewalls and steal the sensitive information of
organisations. Because a cloud provider hosts numerous clients, each client can be affected
by actions taken against any one of them: when a threat reaches the main server, it affects
all the other clients as well, as in distributed denial-of-service attacks, where server requests
from widely distributed computers inundate a provider.

r) Recovery of lost data

Cloud services face the issue of data loss, so a proper backup policy for the recovery of data
must be in place to deal with it. Vendors must set up proper infrastructure to efficiently
handle server breakdowns and outages. All cloud computing service providers should place
their servers at economically stable locations and maintain proper arrangements for backing
up all data in at least two different locations; ideally, they should run both a hot backup and
a cold backup site.
s) Data portability

Every client wants the leverage of migrating in and out of the cloud, so ensuring data
portability is very necessary. Clients commonly complain about being locked into a cloud
technology from which they cannot switch without restraints; there should be no lock-in
period for switching clouds, and cloud technology must be able to integrate efficiently with
on-premises systems. Clients must have a proper data portability contract with the provider,
and must keep an updated copy of their data in order to be able to switch service providers
should any urgent requirement arise.
t) Cloud management

Managing a cloud is not an easy task; it involves many technical challenges. Dramatic
predictions have been made about the impact of cloud computing, with people suggesting
that the traditional IT department will become outdated, although research supports the
conclusion that cloud impacts are likely to be more gradual and less linear. Business users
can change and update cloud services easily without any direct involvement of the IT
department, and it is the service provider's responsibility to manage the information and
spread it across the organisation; this makes it difficult to manage all the complex
functionality of cloud computing.

u) Dealing with lock-ins

Cloud providers have an important additional incentive to attempt to exploit lock-in, and a
switching cost is always present for any company receiving external services. Exit strategies
and lock-in risks are therefore primary concerns for companies looking to exploit cloud
computing.
v) Transparency of service providers

There is often no transparency about the service provider's infrastructure and service area:
clients are not able to see the exact location where their data is stored or processed. It is a
big challenge for an organisation to transfer its business information to such an unknown
vendor.
w) Transforming data into a virtual set-up

The transition of business data from an on-premises set-up to a virtual set-up is a major
issue for various organisations; data migration and network configuration are serious
problems that keep organisations away from cloud computing technology.
Popularization of cloud computing

The idea of the cloud has become so popular that there is a rush among CIOs to implement
virtualization, which has led to more complexities than solutions. These are some common
problems with executing cloud computing in real life, but the benefits of cloud computing
far outweigh these hazards. With the right solutions in place, a business can avail itself of
the huge benefits of cloud technology and be taken to new heights.

5. Conclusion
Cloud computing has emerged as a major technology for providing services over the Internet
in an easy and efficient way. The main reason for the success of cloud computing, and for
the vast interest from organizations throughout the world, is the broad category of services
provided with the cloud; cloud computing is making utility computing a reality. However,
current technology does not yet meet all the requirements of cloud computing, and many
challenges must be addressed by researchers to make cloud computing work well in practice.
Challenges such as security issues and data issues must be resolved for customers to trust
the services provided by the cloud, while challenges such as performance issues and design
issues like energy management are important for service providers seeking to improve their
services. In this paper we have identified the challenges in terms of security issues, data
challenges, performance challenges, and other design challenges, and we have provided
insight into possible solutions to these problems, even though much work remains to be
done in this regard.

6. References
1. Jensen, M. (2009, September). On Technical Security Issues in Cloud Computing. IEEE
International Conference on Cloud Computing, 109-116.
2. P. Mell and T. Grance. The NIST Definition of Cloud Computing (Draft). Available:
3. G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song.
Provable data possession at untrusted stores. In ACM CCS, pages 598-609, 2007.
4. G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik. Scalable and efficient provable
data possession. SecureComm, 2008.
5. K. D. Bowers, A. Juels, and A. Oprea. HAIL: A high-availability and integrity layer for
cloud storage. Proc. 16th ACM Conference on Computer and Communications Security,
2009, pp. 187-198.
6. S. Kandula, D. Katabi, M. Jacob, and A. Berger. Botz-4-sale: Surviving organized DDoS
attacks that mimic flash crowds. In Proc. NSDI, 2005.
7. A. Yaar, A. Perrig, and D. Song. FIT: Fast Internet traceback. In Proc. IEEE Infocom,
March 2005.
8. S. Staniford, V. Paxson, and N. Weaver. How to own the Internet in your spare time. In
Proc. USENIX Security, 2002.
9. Cisco data center infrastructure 2.5 design guide.
10. http://www.ijser.org/researchpaper/Cloud-Computing-Security-Issues-and-Challenges.pdf
11. http://mnagarajan.com/Research/Cloud%20Computing.pdf
