
Acknowledgement

I have put considerable effort into this report. However, it would not have been possible without the kind support and help of many individuals and my friends, and I would like to extend my sincere thanks to all of them. I am highly indebted to Mrs. Sasmitarani Behera for her guidance and constant supervision, for providing the necessary information regarding the report, and for her support in completing it. I would like to express my gratitude towards my parents and friends for their kind co-operation and encouragement, which helped me complete this report. I would also like to express my special gratitude and thanks to my teachers for giving me such attention and time. My thanks and appreciation go as well to my colleagues who helped in developing the report and to everyone who willingly helped me with their abilities.


Table of Contents

Abstract
1. Introduction
   Cloud computing
   Evolution of cloud computing
   Characteristics of cloud computing
2. Models of cloud computing
   System models
   Deployment models
3. Cloud computing architecture
   Virtualization
   Architecture
   The Intercloud
   Cloud engineering
4. Data storage & security
   Secure data storage in cloud computing
   Cloud security
   Information security
   Infrastructure security
5. Business value of cloud computing
Conclusion

Abstract

Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). The name comes from the common use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts remote services with a user's data, software and computation.

Cloud computing is the result of the evolution and adoption of existing technologies and paradigms. Its goal is to allow users to benefit from all of these technologies without needing deep knowledge about or expertise with each one of them. The cloud aims to cut costs and to help users focus on their core business instead of being impeded by IT obstacles. The main enabling technologies for cloud computing are virtualization and autonomic computing. Virtualization abstracts the physical infrastructure, which is the most rigid component, and makes it available as a soft component that is easy to use and manage. By doing so, virtualization provides the agility required to speed up IT operations and reduces cost by increasing infrastructure utilization. Autonomic computing, in turn, automates the process through which the user can provision resources on demand. By minimizing user involvement, automation speeds up the process and reduces the possibility of human error.

Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services. Cloud computing has gained a lot of hype in the current world of IT and is said to be the next big thing in the computer world after the Internet. Cloud computing is the use of the Internet for tasks performed on the computer, and it is visualized as the next-generation architecture of the IT enterprise. The cloud represents the Internet. Cloud computing is related to several technologies, and the convergence of various technologies has emerged to be called cloud computing. In comparison to conventional approaches, cloud computing moves application software and databases to large data centers, where the data and services may not be fully trustworthy. In this report, I focus on secure data storage in the cloud, which is an important aspect of quality of service. To ensure the correctness of users' data in the cloud, I propose an effectual and adaptable scheme with salient qualities. This scheme achieves data storage correctness, allows authenticated users to access the data, and provides data error localization, i.e., the identification of misbehaving servers.


Introduction
Cloud computing is the use of computing resources (hardware and software) that
are delivered as a service over a network (typically the Internet). The name comes from the common use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts remote services with a user's data, software and computation.

Figure 01: Cloud computing logical diagram

End users access cloud-based applications through a web browser or a light-weight desktop or mobile app while the business software and user's data are stored on servers at a remote location. Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and focus on projects that differentiate their businesses instead of infrastructure. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand.


In the business model using software as a service (SaaS), users are provided access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis; SaaS providers generally price applications using a subscription fee. Proponents claim that SaaS gives a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and personnel expenses, towards meeting other IT goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS is that the users' data are stored on the cloud provider's servers; as a result, there could be unauthorized access to the data. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.

Evolution of cloud computing


This section reviews the history of cloud computing and introduces the IBM vision for cloud computing that supports dynamic infrastructures. The following section introduces an infrastructure framework for a data center and discusses the virtualized environment and infrastructure management. Subsequently, existing cloud infrastructures and their applications are described. Cloud computing is an important topic; however, it is not a revolutionary new development, but an evolution that has taken place over several decades, as shown in Figure 02.

Figure 02: Evolution toward cloud computing


The trend toward cloud computing started in the late 1980s with the concept of grid computing, when, for the first time, a large number of systems were applied to a single problem, usually scientific in nature and requiring exceptionally high levels of parallel computation. That said, it's important to distinguish between grid computing and cloud computing. Grid computing specifically refers to leveraging several computers in parallel to solve a particular, individual problem, or to run a specific application. Cloud computing, on the other hand, refers to leveraging multiple resources, including computing resources, to deliver a service to the end user.

In grid computing, the focus is on moving a workload to the location of the needed computing resources, which are mostly remote and readily available for use. Usually a grid is a cluster of servers on which a large task can be divided into smaller tasks that run in parallel; from this point of view, a grid could actually be viewed as just one virtual server. Grids also require applications to conform to the grid software interfaces. In a cloud environment, computing and extended IT and business resources, such as servers, storage, network, applications and processes, can be dynamically shaped or carved out from the underlying hardware infrastructure and made available to a workload. In addition, while a cloud can provision and support a grid, a cloud can also support non-grid environments, such as a three-tier web architecture running traditional or Web 2.0 applications.

In the 1990s, the concept of virtualization was expanded beyond virtual servers to higher levels of abstraction: first the virtual platform, including storage and network resources, and subsequently the virtual application, which has no specific underlying infrastructure. Utility computing offered clusters as virtual platforms for computing with a metered business model. More recently, software as a service (SaaS) has raised the level of virtualization to the application, with a business model of charging not by the resources consumed but by the value of the application to subscribers.

The concept of cloud computing has evolved from the concepts of grid, utility and SaaS computing. It is an emerging model through which users can gain access to their applications from anywhere, at any time, through their connected devices. These applications reside in massively scalable data centers where compute resources can be dynamically provisioned and shared to achieve significant economies of scale. Companies can choose to share these resources using public or private clouds, depending on their specific needs. Public clouds expose services to customers, businesses and consumers on the Internet. Private clouds are generally restricted to use within a company behind a firewall and, as a result, have fewer security exposures. The strength of a cloud is its infrastructure management, enabled by the maturity and progress of virtualization technology, which manages and better utilizes the underlying resources through automatic provisioning, re-imaging, workload rebalancing, monitoring, systematic change request handling, and a dynamic and automated security and resiliency platform.


Characteristics of cloud computing


Application programming interface (API) accessibility to software enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs.

Cost is claimed to be reduced, and in a public cloud delivery model capital expenditure is converted to operational expenditure. This is purported to lower barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and fewer IT skills are required for in-house implementation. The e-FISCAL project's state-of-the-art repository contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
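To make the REST-based API access mentioned above concrete, here is a minimal sketch of a client listing its virtual machines over HTTP. The endpoint URL, bearer token and response fields are illustrative assumptions, not any specific provider's API.

```python
import requests

# Hypothetical REST endpoint and token; real providers expose similar,
# but differently named, resources and authentication schemes.
API_BASE = "https://cloud.example.com/api/v1"
TOKEN = "replace-with-a-real-api-token"

def list_virtual_machines():
    """Fetch the caller's virtual machines from the (hypothetical) cloud API."""
    response = requests.get(
        f"{API_BASE}/servers",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()          # surface HTTP errors instead of hiding them
    return response.json()["servers"]    # assumed response shape

if __name__ == "__main__":
    for vm in list_virtual_machines():
        print(vm["id"], vm["status"])
```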

Device and location independence enables users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.

Virtualization technology allows servers and storage devices to be shared and utilization to be increased. Applications can be easily migrated from one physical server to another.

Multitenancy enables sharing of resources and costs across a large pool of users, thus allowing for:
- centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.);
- peak-load capacity increases (users need not engineer for the highest possible load levels);
- utilisation and efficiency improvements for systems that are often only 10-20% utilised.

Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data and the lack of security for stored kernels. Security is often as good as or better than in traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford to address. However, the complexity of security is greatly increased when data is distributed over a wider area or a greater number of devices, and in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.


Reliability is improved if multiple redundant sites are used, which makes well-designed
cloud computing suitable for business continuity and disaster recovery.

On-demand self-service allows users to obtain, configure and deploy cloud services
themselves using cloud service catalogues, without requiring the assistance of IT. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.

Broad network access

Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).

Resource pooling

The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.

Rapid elasticity Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.

Measured service. Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
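As a small illustration of this metering idea, the sketch below aggregates per-tenant usage records into charges. The record format and unit prices are invented for the example and do not reflect any real provider's billing.

```python
from collections import defaultdict

# Invented unit prices per resource type (currency units per unit consumed).
PRICES = {"cpu_hours": 0.05, "gb_storage": 0.02, "gb_transfer": 0.01}

def bill(usage_records):
    """Aggregate metered usage records into a charge per tenant.

    Each record is a dict like:
    {"tenant": "acme", "resource": "cpu_hours", "amount": 12.0}
    """
    totals = defaultdict(float)
    for record in usage_records:
        totals[record["tenant"]] += PRICES[record["resource"]] * record["amount"]
    return dict(totals)

records = [
    {"tenant": "acme", "resource": "cpu_hours", "amount": 120},
    {"tenant": "acme", "resource": "gb_storage", "amount": 500},
    {"tenant": "globex", "resource": "gb_transfer", "amount": 80},
]
print(bill(records))  # {'acme': 16.0, 'globex': 0.8}
```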

Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.

Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.

Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time, without users having to engineer for peak loads.


Models of cloud computing

System models


Cloud computing providers offer their services according to several fundamental models: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), where IaaS is the most basic and each higher model abstracts from the details of the lower models. Other key components of "anything as a service" (XaaS) are described in a comprehensive taxonomy model published in 2009, such as Strategy-as-a-Service, Collaboration-as-a-Service, Business-Process-as-a-Service, Database-as-a-Service, etc. In 2012, network as a service (NaaS) and communication as a service (CaaS) were officially included by the ITU (International Telecommunication Union) as part of the basic cloud computing models, recognized service categories of a telecommunication-centric cloud ecosystem. The three fundamental models are:
Infrastructure as a service (IaaS)
Platform as a service (PaaS)
Software as a service (SaaS)

Infrastructure as a service (IaaS)


In the most basic cloud-service model, providers of IaaS offer computers - physical or (more often) virtual machines - and other resources. (A hypervisor, such as Xen or KVM, runs the virtual machines as guests. Pools of hypervisors within the cloud operational support-system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements.) IaaS clouds often offer additional resources such as a virtual-machine disk image library, raw (block) and file-based storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles. IaaS-cloud providers supply these resources on-demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis. Examples of IaaS providers include: Amazon EC2, Azure Services Platform, DynDNS, Google Compute Engine, HP Cloud, iland, Joyent, LeaseWeb, Linode, NaviSite, Oracle Infrastructure as a Service, Rackspace Open Cloud, ReadySpace Cloud Services, ReliaCloud, SAVVIS, SingleHop, and Terremark.
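As one concrete example of this on-demand, utility-style provisioning, the sketch below requests a single virtual machine from Amazon EC2 using the boto3 library. The machine image ID and key pair name are placeholders, and the snippet assumes AWS credentials are already configured in the environment.

```python
import boto3

# Assumes AWS credentials are configured (e.g. via environment variables);
# the image ID and key name below are placeholders, not real resources.
ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",
    KeyName="my-key-pair",             # placeholder SSH key pair
    MinCount=1,
    MaxCount=1,
)

instance = instances[0]
instance.wait_until_running()          # block until the VM is provisioned
instance.reload()
print("Provisioned", instance.id, "at", instance.public_ip_address)
```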


Figure 03: System models

Platform as a service (PaaS)


In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, programming language execution environment, database, and web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offerings, the underlying compute and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually. Examples of PaaS include: AWS Elastic Beanstalk, Cloud Foundry, Heroku, Force.com, EngineYard, Mendix, OpenShift, Google App Engine, Windows Azure Cloud Services and OrangeScape.
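To illustrate what the PaaS model means for a developer, the following minimal web application (using Flask purely as an example framework) is essentially all the code a developer would supply; the platform itself provides the operating system, web server, runtime and scaling.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # The PaaS provisions and scales the servers running this handler;
    # the developer deploys only the application code and a small config file.
    return "Hello from a PaaS-hosted application!"

if __name__ == "__main__":
    # Local development only; in production the platform runs the app
    # behind its own web server and load balancer.
    app.run(port=8080)
```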


Figure 04: Functions of different system models in cloud computing

Software as a service (SaaS)


In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications are different from other applications in their scalability, which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access point. To accommodate a large number of cloud users, cloud applications can be multitenant, that is, any machine serves more than one cloud user organization.
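The multitenancy just described is commonly implemented by tagging every record with the owning tenant, so that one shared deployment can serve many customer organizations. The toy in-memory store below sketches that pattern; the tenant names and fields are invented for illustration.

```python
class MultiTenantStore:
    """Toy multi-tenant data store: one shared structure, data keyed by tenant."""

    def __init__(self):
        self._rows = []   # shared storage for all tenants

    def insert(self, tenant_id, record):
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Every query is filtered by tenant, so one organization can
        # never see another organization's rows.
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

store = MultiTenantStore()
store.insert("acme", {"invoice": 1001, "total": 250})
store.insert("globex", {"invoice": 2001, "total": 90})
print(store.query("acme"))   # only ACME's records are returned
```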


It is common to refer to special types of cloud-based application software with a similar naming convention: desktop as a service, business process as a service, test environment as a service, communication as a service. The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, so the price is scalable and adjustable if users are added or removed at any point.

Figure 05: Layered infrastructure


Deployment models
Public cloud
Community cloud
Hybrid cloud
Private cloud

Public cloud
Public cloud applications, storage, and other resources are made available to the general public by a service provider. These services are free or offered on a pay-per-use model. Generally, public cloud service providers like Amazon AWS, Microsoft and Google own and operate the infrastructure and offer access only via the Internet (direct connectivity is not offered).

Figure 06: Cloud computing types


Community cloud
A community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party and hosted internally or externally. The costs are spread over fewer users than in a public cloud (but more than in a private cloud), so only some of the cost savings potential of cloud computing is realized.

Figure 07: Deployment models

Hybrid cloud
A hybrid cloud is a composition of two or more clouds (private, community or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models. Such composition expands deployment options for cloud services, allowing IT organizations to use public cloud computing resources to meet temporary needs. This capability enables hybrid clouds to employ cloud bursting for scaling across clouds.


Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization pays for extra compute resources only when they are needed. Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads and to use cloud resources, from public or private clouds, during spikes in processing demand. By utilizing "hybrid cloud" architecture, companies and individuals are able to obtain degrees of fault tolerance combined with locally immediate usability without dependency on Internet connectivity. Hybrid cloud architecture requires both on-premises resources and off-site (remote) server-based cloud infrastructure. While hybrid clouds lack the flexibility, security and certainty of in-house applications, a hybrid cloud provides the flexibility of in-house applications with the fault tolerance and scalability of cloud-based services.
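A minimal sketch of the cloud-bursting decision described above: work is served from the private cloud until its capacity is exceeded, and only the overflow triggers provisioning of public cloud instances. The capacity figures and the provisioning call are placeholders standing in for a real IaaS API.

```python
PRIVATE_CAPACITY = 100      # requests/second the in-house data center can serve (assumed)

def provision_public_instances(extra_load, per_instance_capacity=25):
    """Placeholder for a real public-cloud provisioning call (e.g. an IaaS API)."""
    needed = -(-extra_load // per_instance_capacity)   # ceiling division
    print(f"Bursting: starting {needed} public cloud instance(s)")
    return needed

def route(load):
    """Serve load privately while possible, burst to the public cloud on spikes."""
    if load <= PRIVATE_CAPACITY:
        print(f"Load {load} handled entirely in the private cloud")
        return 0
    return provision_public_instances(load - PRIVATE_CAPACITY)

route(80)    # normal day: stays in-house
route(160)   # spike: 60 req/s overflow -> 3 extra public instances
```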

Private cloud
A private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party and hosted internally or externally. Undertaking a private cloud project requires a significant level and degree of engagement to virtualize the business environment, and requires the organization to re-evaluate decisions about existing resources. When done right, it can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities. Private clouds have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".

Cloud computing architecture

Virtualization


Virtualization refers to the abstraction of logical resources away from their underlying physical resources in order to improve agility and flexibility, reduce costs and thus enhance business value. In a virtualized environment, computing environments can be dynamically created, expanded, shrunk or moved as demand varies. Virtualization is therefore extremely well suited to a dynamic cloud infrastructure, because it provides important advantages in sharing, manageability and isolation (that is, multiple users and applications can share physical resources without affecting one another). Virtualization allows a set of underutilized


physical servers to be consolidated into a smaller number of more fully utilized physical servers, contributing to significant cost savings. There are many forms of virtualization commonly in use in today's IT infrastructures, and virtualization can mean different things to different people, depending on the context. A common interpretation of server virtualization is the mapping of a single physical resource to multiple logical representations or partitions. Logical partitions (LPARs) and virtual machines (VMs) are examples of this definition; IBM pioneered this space in the 1960s. Virtualization technology is not limited to servers, but can also be applied to storage, networking and applications, each of which could be the subject of its own paper.

Figure 08: The cloud computing adoption model


How does server virtualization work?


In most cases, server virtualization is accomplished by the use of a hypervisor to logically assign and separate physical resources. The hypervisor allows a guest operating system, running on the virtual machine, to function as if it were solely in control of the hardware, unaware that other guests are sharing it. Each guest operating system is protected from the others and is thus unaffected by any instability or configuration issues of the others. Today, hypervisors are becoming a ubiquitous virtualization layer on client and server systems. There are two major types of hypervisors: bare-metal and hosted hypervisors.
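One way to see a hypervisor at work is through the libvirt Python bindings, which can list the guest virtual machines a local KVM/QEMU hypervisor is running. The sketch below assumes libvirt-python is installed and a local qemu:///system hypervisor is reachable.

```python
import libvirt   # pip install libvirt-python; requires a local libvirt daemon

# Connect to the local KVM/QEMU hypervisor (assumes qemu:///system is available).
conn = libvirt.open("qemu:///system")
if conn is None:
    raise SystemExit("Failed to connect to the hypervisor")

# Each guest OS runs in its own domain, isolated from the others by the hypervisor.
for domain in conn.listAllDomains():
    state, _ = domain.state()
    running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
    print(f"{domain.name()}: {running}")

conn.close()
```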

Architecture
Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over a loose coupling mechanism such as a messaging queue. Elastic provision implies intelligence in the use of tight or loose coupling as applied to mechanisms such as these and others.

Figure 09: Cloud computing sample architecture


The Intercloud
The Intercloud is an interconnected global "cloud of clouds" and an extension of the Internet "network of networks" on which it is based.

Cloud engineering
Cloud engineering is the application of engineering disciplines to cloud computing. It brings a systematic approach to the high-level concerns of commercialisation, standardisation, and governance in conceiving, developing, operating and maintaining cloud computing systems. It is a multidisciplinary method encompassing contributions from diverse areas such as systems, software, web, performance, information, security, platform, risk, and quality engineering.

Data storage & security

Secure Data Storage in Cloud Computing


Cloud storage is a model of networked enterprise storage where data is stored in virtualized pools of storage, which are generally hosted by third parties. Hosting companies operate large data centers, and people who require their data to be hosted buy or lease storage capacity from them. The data center operators, in the background, virtualize the resources according to the requirements of the customer and expose them as storage pools, which the customers can themselves use to store files or data objects. Physically, the resource may span multiple servers. The safety of the files depends upon the hosting provider.

Figure 10: Secure data storage in cloud computing


In a cloud data storage system, users store their data in the cloud and no longer possess the data locally. Thus, the correctness and availability of the data files being stored on the distributed cloud servers must be guaranteed. One of the key issues is to effectively detect any unauthorized data modification and corruption, possibly due to server compromise and/or random Byzantine failures. Moreover, in the distributed case, when such inconsistencies are successfully detected, finding the server on which the data error lies is also of great significance, since it can be the first step toward quickly recovering from storage errors. To address these problems, our main scheme for ensuring cloud data storage is presented in this section. The first part of the section is devoted to a review of basic tools from coding theory that are needed in our scheme for file distribution across cloud servers. Then, the homomorphic token is introduced. The token computation function we are considering belongs to a family of universal hash functions, chosen to preserve the homomorphic properties, which can be perfectly integrated with the verification of erasure-coded data.
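The full scheme relies on erasure coding and homomorphic tokens; as a deliberately simplified illustration of the verification and error-localization idea only, the sketch below precomputes keyed digests of the blocks handed to each server and later challenges every server, flagging the one whose answer no longer matches. HMAC is used here as a stand-in for the homomorphic token function and is an assumption of this sketch, not part of the actual scheme.

```python
import hmac, hashlib, secrets

def token(key, block):
    """Keyed digest of one data block (a stand-in for the homomorphic token)."""
    return hmac.new(key, block, hashlib.sha256).hexdigest()

# Owner side: split the file into blocks, one block per cloud server,
# and keep only the secret key and the expected tokens locally.
key = secrets.token_bytes(32)
blocks = [b"block-0 data", b"block-1 data", b"block-2 data"]
expected = [token(key, b) for b in blocks]

# Cloud side: server 1 silently corrupts its block.
stored = [blocks[0], b"block-1 data (corrupted)", blocks[2]]

# Verification: challenge every server and localize the error.
for i, server_block in enumerate(stored):
    if token(key, server_block) != expected[i]:
        print(f"Server {i} failed verification: data modified or corrupted")
    else:
        print(f"Server {i} passed verification")
```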

Cloud security
Principles for Securing the Cloud: Secure Identity, Information, Infrastructure
Public cloud computing requires a security model that reconciles scalability and multitenancy with the need for trust. As enterprises move their computing environments, with their identities, information and infrastructure, to the cloud, they must be willing to give up some level of control. To do that, they must be able to trust cloud systems and providers, and to verify cloud processes and events. Important building blocks of trust and verification relationships include access control, data security, compliance and event management - all security elements well understood by IT departments today, implemented with existing products and technologies, and extendable into the cloud.

Figure 11: Cloud security


Identity security
End-to-end identity management, third-party authentication services, and federated identity will become a key element of cloud security. Identity security preserves the integrity and confidentiality of data and applications while making access readily available to appropriate users. Support for these identity management capabilities for both users and infrastructure components will be a major requirement for cloud computing, and identity will have to be managed in ways that build trust. It will require:

Strong authentication: Cloud computing must move beyond weak username-and-password authentication if it is going to support the enterprise. This will mean adopting techniques and technologies that are already standard in enterprise IT, such as strong authentication (multi-factor authentication with one-time password technology), federation within and across enterprises, and risk-based authentication that measures behaviour history, current context and other factors to assess the risk level of a user request. Additional tiering of authentication will be essential to meet security SLAs, and utilizing a risk-based authentication model that is largely transparent to users will actually reduce the need for broader federation of access controls.

More granular authorization: Authorization can be coarse-grained within an enterprise or even a private cloud, but in order to handle sensitive data and compliance requirements, public clouds will need granular authorization capabilities (such as role-based controls and IRM) that can persist throughout the cloud infrastructure and the data's lifecycle.
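For example, the one-time-password factor mentioned under strong authentication can be demonstrated with the pyotp library, which implements the standard TOTP algorithm; the secret below is generated on the spot and would in practice be enrolled into the user's authenticator app.

```python
import pyotp   # pip install pyotp; implements standard TOTP/HOTP one-time passwords

# Enrolment: generate a per-user secret and share it with the user's
# authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Current one-time password:", totp.now())

# Login: the password alone is not enough; the service also checks the
# time-based code as a second factor.
submitted_code = totp.now()                 # what the user types from their app
print("Second factor valid:", totp.verify(submitted_code))
```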

Information security
In the traditional data center, controls on physical access, access to hardware and software, and identity controls all combine to protect the data. In the cloud, that protective barrier that secures infrastructure is diffused. To compensate, security will have to become information-centric: the data needs its own security that travels with it and protects it. It will require:

Data isolation: In multi-tenancy situations, data must be held securely in order to protect it when multiple customers use shared resources. Virtualization, encryption and access control will be workhorses for enabling varying degrees of separation between corporations, communities of interest and users. In the near future, data isolation will be more important and more practical for IaaS than perhaps for PaaS and SaaS.

More granular data security: As the sensitivity of information increases, the granularity of data classification enforcement must increase. In current data center environments, granularity of role-based access control at the level of user groups or business units is acceptable in most cases, because the information remains within the control of the enterprise itself. For information in the cloud, sensitive data will require security at the file, field, or even block level to meet the demands of assurance and compliance.


Consistent data security: There will be an obvious need for policy-based content protection to meet the enterprise's own needs as well as regulatory policy mandates. For some categories of data, information-centric security will necessitate encryption in transit and at rest (a small illustrative sketch of encryption at rest appears at the end of this section), as well as management across the cloud and throughout the data lifecycle.

Effective data classification: Cloud computing imposes a resource trade-off between high performance and the requirements of increasingly robust security. Data classification is an essential tool for balancing that equation. Enterprises will need to know what data is important and where it is located as prerequisites to making performance cost/benefit decisions, as well as to ensuring focus on the most critical areas for data loss prevention procedures.

Information rights management: IRM is often treated as a component of identity, a way of setting broad-brush controls on which users have access to which data. But more granular data-centric security requires that policies and control mechanisms on the storage and use of information be associated directly with the information itself.

Governance and compliance: A key requirement of corporate information governance and compliance is the creation of management and validation information - monitoring and auditing the security state of the information with logging capabilities. Here, it is important not only to document access and denials to data, but also to ensure that IT systems are configured to meet security specifications and have not been altered. Expanding retention policies for data policy compliance will also become an essential cloud capability. In essence, cloud computing infrastructures must be able to verify that data is being managed per the applicable local and international regulations (such as PCI and HIPAA) with appropriate controls, log collection and reporting.

Infrastructure security
The foundational infrastructure for a cloud must be inherently secure, whether it is a private or public cloud and whether the service is SaaS, PaaS or IaaS. It will require:

Inherent component-level security: The cloud needs to be architected to be secure, built with inherently secure components, deployed and provisioned securely with strong interfaces to other components, and, finally, supported securely, with vulnerability-assessment and change-management processes that produce management information and service-level assurances that build trust. For these flexibly deployed components, device fingerprinting to ensure secure configuration and state will also be an important security element, just as it is for the data and identities themselves.

More granular interface security: The points in the system where hand-offs occur - user-to-network, server-to-application - require granular security policies and controls that ensure consistency and accountability. Here, either the end-to-end system needs to be proprietary, a de facto standard, or a federation of vendors offering consistently deployed security policies.


Resource lifecycle management: The economics of cloud computing are based on multitenancy and the sharing of resources. As a customer's needs and requirements change, a service provider must provision and decommission those resources - bandwidth, servers, storage, and security - accordingly. This lifecycle process must be managed for accountability in order to build trust.
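As a concrete illustration of the "encryption at rest" and data isolation requirements raised under information security above, the sketch below gives each tenant its own key and encrypts records before they reach shared storage, using the cryptography library's Fernet recipe; the tenant names and record contents are invented.

```python
from cryptography.fernet import Fernet   # pip install cryptography

# Each tenant gets its own data-encryption key, so records in the shared
# storage pool are unreadable without that tenant's key.
tenant_keys = {"acme": Fernet.generate_key(), "globex": Fernet.generate_key()}
shared_storage = {}

def put(tenant, record_id, plaintext):
    f = Fernet(tenant_keys[tenant])
    shared_storage[(tenant, record_id)] = f.encrypt(plaintext.encode())

def get(tenant, record_id):
    f = Fernet(tenant_keys[tenant])
    return f.decrypt(shared_storage[(tenant, record_id)]).decode()

put("acme", "doc-1", "quarterly revenue figures")
print(get("acme", "doc-1"))              # ACME can read its own record
# Decrypting ACME's record with Globex's key would raise an InvalidToken error.
```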

Business value of cloud computing


Cloud computing is an emerging computing model by which users can gain access to their applications from anywhere, through any connected device. A user-centric interface makes the cloud infrastructure supporting the applications transparent to users. The applications reside in massively scalable data centers where computational resources can be dynamically provisioned and shared to achieve significant economies of scale. Thanks to a strong service management platform, the management costs of adding more IT resources to the cloud can be significantly lower than those associated with alternative infrastructures. The infrastructure management methodology enables IT organizations to manage large numbers of highly virtualized resources as a single large resource. It also allows IT organizations to massively increase their data center resources without significantly increasing the number of people traditionally required to maintain that increase.

For organizations currently using traditional infrastructures, a cloud will enable users to consume IT resources in the data center in ways that were never available before. Companies that employ traditional data center management practices know that making IT resources available to an end user can be time-intensive. It involves many steps, such as procuring hardware; finding raised floor space and sufficient power and cooling; allocating administrators to install operating systems, middleware and software; provisioning the network; and securing the environment. Most companies find that this process can take upwards of two to three months; IT organizations that are re-provisioning existing hardware resources find that it still takes several weeks.

A cloud dramatically alleviates this problem by implementing automation, business workflows and resource abstraction that allow a user to browse a catalog of IT services, add them to a shopping cart and submit the order. After an administrator approves the order, the cloud does the rest. This process reduces the time required to make those resources available to the customer from months to minutes.

The cloud also provides a user interface that allows both the user and the IT administrator to easily manage the provisioned resources through the life cycle of the service request. After a user's resources have been delivered by a cloud, the user can track the order, which typically consists of some number of servers and software; view the health of those resources; add servers; change the installed software; remove servers; increase or decrease the allocated processing power, memory or storage; and even start, stop and restart servers. These are self-service functions that can be performed 24 hours a day and take only minutes to perform.


By contrast, in a non-cloud environment, it could take hours or days for someone to have a server restarted or hardware or software configurations changed. The business model of a cloud facilitates more efficient use of existing resources. Clouds can require users to commit to predefined start and end dates for resource requests. This helps IT organizations to more efficiently repurpose resources that often get forgotten or go unused. When users realize they can get resources within minutes of a request, they are less likely to hoard resources that are otherwise very difficult to acquire.

Clouds provide request-driven, dynamic allocation of computing resources for a mix of workloads on a massively scalable, heterogeneous and virtualized infrastructure. The value of a fully automated provisioning process that is security compliant and automatically customized to users' needs results in:
- significantly reduced time to introduce technologies and innovations;
- cost savings in labour for designing, procuring and building hardware and software platforms;
- cost savings by avoiding human error in the configuration of security, networks and the software provisioning process;
- cost elimination through greater use and reuse of existing resources, resulting in better efficiency.

The cloud computing model reduces the need for capacity planning at an application level. The user of an application can request resources from the cloud and obtain them in less than an hour. A user who needs more resources can submit another request and obtain more resources within minutes, and in a policy-based system no interaction is needed at all; resource changes are performed dynamically. Thus, it is far less important to correctly predict the capacity requirements for an application than it is in traditional data centers, and capacity planning is simplified because it is performed only once for the entire data center.

Today's IT realities make cloud computing a good fit for meeting the needs of both IT providers (who demand unprecedented flexibility and efficiency, lower costs and complexity, and support for varied and huge workloads) and Internet users (who expect availability, function and speed). As technology such as virtualization and corresponding management services like automation, monitoring and capacity planning become more mature, cloud computing will become more widely used for increasingly diverse and even mission-critical workloads.


Conclusion
Cloud computing promises to change the economics of the data center, but before sensitive and regulated data move into the public cloud, issues of security standards and compatibility must be addressed, including strong authentication, delegated authorization, key management for encrypted data, data loss protection, and regulatory reporting. All are elements of a secure identity, information and infrastructure model, and all are applicable to private and public clouds as well as to IaaS, PaaS and SaaS services. In the development of public and private clouds, enterprises and service providers will need to use these guiding principles to selectively adopt and extend security tools and secure products in order to build and offer end-to-end trustworthy cloud computing and services. Fortunately, many of these security solutions are largely available today and are being developed further to support increasingly seamless cloud functionality.



