
Incorporating

IT Management and Strategy Report

OVUM Butler Group

Planning for Cloud Computing


Understanding the organizational, governance, and cost implications November 2010

Part of the Datamonitor Group

WWW.OVUM.COM

Enterprise IT Knowledge Centre


At the heart of the new service are more than 150 ICT analysts from the former Ovum and Butler teams. They provide deep insight into both vertical and horizontal business technology, delivered through best-in-class research and analysis. To their insights, we add the expertise of Datamonitor's 350 business analysts. It is this combination that makes the new Ovum IT service especially valuable to clients: by integrating the three teams, we can offer unique insight into the opportunities and issues facing you and your customers, and dispense invaluable advice to help you create an effective technology strategy, a process that we describe as Collaborative Intelligence. Our comprehensive research agenda spans the full IT investment lifecycle. Our analysis and advice help you to create the optimal technology investment portfolio for the organisation, select and implement the appropriate solutions and services, and manage those investments to realise the desired business benefits. Our coverage ranges from insight into industry-specific business processes and analysis of vendor markets, through to radical opinion on disruptive technologies and best-practice IT implementation guides. Here we present thought-leading research and strong examples of Collaborative Intelligence in action, and we look forward to working in partnership with enterprises globally. For more information, please contact Mike James on +44 1482 608380 or mike.james@ovum.com

Research
Laurent Lachal, Stephen Mann

Acknowledgements
Ian Brown, Jens Butler, Steve Hodgkinson, John Madden, Graham Titterington

Published by Ovum, November 2010. © Ovum.

All rights reserved. This publication, or any part of it, may not be reproduced or adapted, by any method whatsoever, without prior written Ovum consent. Artwork and layout by Karl Duke, Steve Duke, and Jennifer Swallow.

Important Notice
We have relied on data and information which we reasonably believe to be up-to-date and correct when preparing this Report, but because it comes from a variety of sources outside of our direct control, we cannot guarantee that all of it is entirely accurate or up-to-date. This Report is of a general nature and not intended to be specific, customised, or relevant to the requirements of any particular set of circumstances. The interpretations contained in the Report are non-unique and you are responsible for carrying out your own interpretation of the data and information upon which this Report was based. Accordingly, Ovum is not responsible for your use of this Report in any specific circumstances, or for your interpretation of this Report. The interpretation of the data and information in this Report is based on generalised assumptions and by its very nature is not intended to produce accurate or specific results. Accordingly, it is your responsibility to use your own relevant professional skill and judgement to interpret the data and information provided for your own purposes and take appropriate decisions based on such interpretations. Ultimate responsibility for all interpretations of the data, information and commentary in this Report and for decisions based on that data, information and commentary remains with you. Ovum shall not be liable for any such interpretations or decisions made by you.

Planning for Cloud Computing


Contents November 2010
Chapter 1: Executive summary  7
1.1 Executive summary  9
1.2 Report objectives and structure  14

Chapter 2: Cloud computing will be hybrid  15
2.1 Summary  17
2.2 Cloud computing is controversial and important  17
2.3 The public cloud market is more complex than expected  20
2.4 Private clouds are catching up with the public cloud Joneses  26
2.5 Hybrid clouds are the next frontier  28
2.6 Recommendations  31

Chapter 3: Cloud computing costs in perspective  33
3.1 Summary  35
3.2 Enterprises need to scrutinize and adapt to public clouds' cost characteristics  36
3.3 Public cloud costings evolve, but not always as expected or for the better  40
3.4 Private clouds put public cloud costs in context  47
3.5 Recommendations  50

Chapter 4: Cloud computing quality of service in perspective  53
4.1 Summary  55
4.2 SLAs are key to cloud adoption  56
4.3 Security is the number-one cloud QoS concern  59
4.4 Reliability and availability are under increasing scrutiny  66
4.5 Scalability underpins cloud computing's elasticity  68
4.6 The road to scalable and reliable private clouds requires new thinking and skills  69
4.7 Recommendations  71


Contents Continued

Chapter 5: Cloud governance: an overview  73
5.1 Summary  75
5.2 Cloud governance builds on IT governance  75
5.3 Cloud governance relies on the same ingredients as IT governance  80
5.4 Cloud governance, like IT governance, is a work in progress  85
5.5 ALM governance needs to expand to the cloud  87
5.6 Cloud governance builds on SOA governance  91
5.7 Recommendations  95

Chapter 6: Public clouds require IT service management to adapt  97
6.1 Summary  99
6.2 Public clouds are changing the IT function  100
6.3 Public clouds are changing the ITSM landscape  102
6.4 ITSM technology has a big role to play in managing public clouds  107
6.5 Recommendations  109

Chapter 7: Glossary  111

Chapter 8: Appendix  115


CHAPTER 1: Executive summary


1.1 Executive summary


Catalyst
Cloud computing promises to tackle two hitherto irreconcilable IT challenges: the need to lower costs and the need to boost innovation. However, it will take a lot of effort from enterprises to actually make it work. Instead of moving their IT "mess for less" somewhere else, the ill-prepared will end up with their IT mess spread across a wider area.

Key findings:
- Cloud computing is controversial and important.
- The public cloud market is more complex than expected.
- Private clouds are catching up with the public cloud Joneses.
- Hybrid clouds are the next frontier.
- Enterprises need to scrutinize and adapt to public cloud cost characteristics.
- Public cloud pricing structures are evolving, but not always as expected or for the better.
- Private clouds put public cloud costs in context.
- Service-level agreements (SLAs) are key to cloud adoption.
- Security is the number-one cloud quality of service (QoS) concern.
- Reliability and availability are under increasing scrutiny.
- Scalability underpins cloud computing's elasticity.
- The road to reliable and scalable private clouds requires new thinking and skills.
- Cloud governance builds on IT governance.
- Cloud governance delivers the right recipe from a variety of ingredients.
- Cloud governance, like IT governance, is a work in progress.
- Application lifecycle management (ALM) governance needs to expand to the cloud.
- Cloud governance builds on service-oriented architecture (SOA) governance.
- Public clouds are changing the IT function.
- Public clouds are changing the IT service management (ITSM) landscape.
- ITSM technology has a big role to play in managing public clouds.

Ovum view
IT is fashion-driven, and cloud computing is the new black. Currently at the height of its hype, it will suffer a backlash before becoming fully mainstream and established, by which time a new phrase will have captured the imagination of IT pundits.


Some fashions are longer-lasting than others


Cloud computing as a term may have a use-by date, but the technical, operational, and commercial innovations behind it are here to stay. Cloud computing is a real innovation in the logic of how IT is sourced and managed and how services are delivered, and its use will grow steadily over the next 12 months.

Moving from "What is cloud computing?" to "How to make the best of it?"
It is no longer a question of whether or not enterprises will use cloud computing: they already are. However, it is still early days for both suppliers and users, many of which have yet to figure out how to take advantage of the various elements of cloud computing. There are plenty of early adopter benefits to be gained, despite the variety of challenges that cloud computing puts in the way.

Get your hands dirty


Taking advantage of the new capabilities of cloud computing requires a change of mindset and the learning of new skills, both of which are better done on the basis of practical experience than academic discussion. Getting hands-on with the cloud is the best way to understand its strengths and weaknesses and its impact.

Cloud computing is a multifaceted phenomenon


Cloud computing is a generic term that refers to several different ways of providing access to shared IT resources that are consumed over a network, mostly but not necessarily the Internet. The term emerged in the second half of 2007 as a category for two emerging types of online offering from the likes of Amazon, Google, and Salesforce.com: infrastructure-as-a-service (IaaS), which delivers raw compute or storage resources, and platform-as-a-service (PaaS), which delivers a complete runtime environment for developing or running applications. It quickly expanded to a third type of online offering: software-as-a-service (SaaS), which provides user access to application functionality, and which predates the use of the cloud term.

In 2009-10 the term expanded further: from a synonym for public clouds, where the provider delivers services to a number of third parties (a category that includes IaaS, PaaS, and SaaS), to the notion of private clouds, which mix old data-center design and management trends with new public cloud approaches to self-service provisioning and pay-as-you-go (PAYG) billing to deliver and consume IT resources, and then to hybrid clouds, which reflect the way IT suppliers and users mix and match public and private cloud components in a variety of ways to fit their own requirements. For more detailed descriptions of each type of cloud please see the chapter entitled "Cloud computing will be hybrid".

Benefits
Cloud computing
Enterprises are turning to cloud computing for the following reasons:
- Convenience: for fast procurement (and termination) of on-demand IT services available on a self-service basis from a variety of networked devices. Convenience drives faster time to market.
- Adaptation: through the ability to mix and match IT services and increase or decrease their use as required (clouds are elastic).
- Innovation: cloud computing makes it easier to try new things while taking fewer risks via a PAYG approach known as utility computing.
- Simplicity: cloud computing short-circuits IT complexity by reducing significant elements of the IT stack to standardized commodity services.
- Lower costs: from economies of scale based on IT resource pooling coupled with the PAYG approach to using these resources (see the sketch after this list).
- Cost transparency/awareness: the ability to understand, measure, and manage who is using which IT resources at what cost for billing, planning, and optimization purposes.
- QoS: enterprises expect public and private cloud IT resources to be more reliable, available, scalable, and secure than traditional ones.
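To make the pay-as-you-go argument concrete, the sketch below compares a usage-based bill with the cost of capacity sized for peak load and left running all month. It is a minimal illustration only: the hourly rate, the usage profile, and the 730-hour month are assumed figures, not any provider's actual prices.

```python
# Illustrative pay-as-you-go (PAYG) comparison. All prices and usage figures
# are assumptions for the sake of the example, not real vendor list prices.

HOURS_PER_MONTH = 730

def payg_cost(hourly_rate, instances_by_hour):
    """Cost when capacity is rented only while it is actually used."""
    return sum(hourly_rate * n for n in instances_by_hour)

def fixed_capacity_cost(hourly_rate, peak_instances):
    """Cost when capacity is sized for peak load and runs all month."""
    return hourly_rate * peak_instances * HOURS_PER_MONTH

# Assumed workload: 20 instances during 200 busy hours, 2 instances otherwise.
usage_profile = [20] * 200 + [2] * (HOURS_PER_MONTH - 200)
rate = 0.12  # assumed price per instance-hour, in dollars

print(f"Pay-as-you-go cost:    ${payg_cost(rate, usage_profile):,.2f}")
print(f"Peak-sized fixed cost: ${fixed_capacity_cost(rate, 20):,.2f}")
```

The gap between the two figures is what the elasticity and utility-computing arguments rest on; it narrows as utilization flattens.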


Private clouds
While public clouds are still mostly vendor-pushed rather than demand-pulled, private clouds are both vendor-pushed and demand-pulled in roughly equal measure. Many enterprises are wary of the limitations in terms of security and bandwidth, and the constraints in terms of application design and functionality of the various types of public cloud, but are curious about the possibility of adopting the technologies, designs, and best practices of these clouds in their own data center under the private cloud name. The aim is to deliver similar benefits to public clouds while remaining in control of the IT infrastructure and therefore security and compliance, and squeezing more value out of existing IT investments. Self-service IT frees IT departments from provisioning issues and abstracts service delivery from implementation.

Public clouds
In addition to generic cloud computing benefits, public clouds enable enterprises to:
- reduce upfront capital expenditures (capex) in favor of more flexible ongoing operating expenditures (opex) via subscription and/or PAYG schemes
- access IT resources that many did not previously have the means to buy and/or implement
- focus limited IT resources (hardware, software, budget, people) on a smaller number of projects to narrow the gap between business ambitions and IT capabilities
- more easily share IT resources both internally and with partners, because public clouds provide ready-made resources instead of requiring those sharing to create the resources in the first place
- expand their reach to new countries and regions and support global operations by recruiting new talent in lower-cost areas within as well as outside their core markets
- have easier access to Internet resources: public cloud resources have been designed from the start to integrate with other Internet resources.

IaaS public clouds


In addition to generic cloud computing and public cloud benefits, IaaS enables enterprises to:
- Reuse existing technology, code, and skills, and minimize lock-in. It does this by making it relatively easy to move IT resources (usually packaged as virtual software appliances) on to and away from an IaaS infrastructure.
- Have a good level of configuration control over the infrastructure resources they use, such as central processing unit (CPU) type, memory amount, and disk configuration, and the software (virtual appliance) they run on top of these resources, within the constraints of the vendor's service catalogue (see the sketch after this list).
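As an illustration of configuring within a vendor's service catalogue, the sketch below picks the cheapest entry from a hypothetical catalogue that satisfies an appliance's CPU, memory, and disk requirements. The catalogue entries, names, and prices are invented for the example and do not describe any real IaaS offering.

```python
# Hypothetical IaaS service catalogue: instance types with CPU, memory, disk
# and an hourly price. All values are invented for illustration only.
CATALOGUE = [
    {"name": "small",  "vcpus": 1, "ram_gb": 2,  "disk_gb": 160, "usd_per_hour": 0.10},
    {"name": "medium", "vcpus": 2, "ram_gb": 4,  "disk_gb": 320, "usd_per_hour": 0.20},
    {"name": "large",  "vcpus": 4, "ram_gb": 15, "disk_gb": 840, "usd_per_hour": 0.40},
]

def cheapest_fit(min_vcpus, min_ram_gb, min_disk_gb):
    """Return the cheapest catalogue entry meeting the stated minimums."""
    candidates = [t for t in CATALOGUE
                  if t["vcpus"] >= min_vcpus
                  and t["ram_gb"] >= min_ram_gb
                  and t["disk_gb"] >= min_disk_gb]
    if not candidates:
        raise ValueError("No catalogue entry satisfies the requirements")
    return min(candidates, key=lambda t: t["usd_per_hour"])

# A virtual appliance that needs 2 vCPUs, 8GB of RAM and 200GB of disk.
print(cheapest_fit(min_vcpus=2, min_ram_gb=8, min_disk_gb=200))
```

The choice is made from the provider's predefined grid rather than by specifying arbitrary hardware, which is what distinguishes catalogue-constrained configuration control from owning the kit.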

PaaS public clouds


In addition to generic cloud computing and public cloud benefits, PaaS:
- Enables developers to focus solely on the business logic that makes a difference to their employer; for example, the PaaS runtime automatically takes responsibility for scaling cloud applications up and down, depending on usage levels.
- Supports more granular pricing structures than IaaS. IaaS pricing applies to the running of whole physical and/or virtual server instances (either on or off), while PaaS pricing can delve deeper into the usage requirement of the software that makes use of PaaS resources, such as a price per request made by the software to the underlying PaaS compute, storage, and network resources (see the sketch after this list).
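The difference in pricing granularity can be illustrated with a small calculation. In the sketch below, an IaaS bill is driven by instance-hours whether the servers are busy or not, while a PaaS bill is driven by what the application actually does; all rates and usage figures are assumptions for the example, not quoted prices.

```python
# Contrast of pricing granularity. All rates are invented for illustration.

def iaas_monthly_cost(instances, usd_per_instance_hour, hours=730):
    # IaaS bills per running (virtual) server instance, busy or idle.
    return instances * usd_per_instance_hour * hours

def paas_monthly_cost(requests, usd_per_million_requests,
                      cpu_hours, usd_per_cpu_hour):
    # PaaS can bill on what the application actually does: requests served
    # and compute actually consumed by those requests.
    return (requests / 1_000_000) * usd_per_million_requests \
           + cpu_hours * usd_per_cpu_hour

# Assumed workload: a lightly used web application.
print(f"IaaS: ${iaas_monthly_cost(2, 0.12):,.2f}")                 # two mostly idle VMs
print(f"PaaS: ${paas_monthly_cost(3_000_000, 0.50, 40, 0.10):,.2f}")
```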

SaaS public clouds


In addition to generic cloud computing benefits including convenience, innovation, simplicity, and lower costs, and public cloud benefits including reduced capex and affordable IT resources, SaaS offerings are turnkey solutions that have matured fast enough to represent a credible alternative to many on-premise applications. They are already running at scale, and customers know that they will work before they sign up. Their web interfaces are usually simpler and they are upgraded more often than their on-premise counterparts, with users usually able to turn on the new features at their convenience.


Deployment and management considerations


Cloud computing
Enterprises' main concerns across both private and public clouds include:
- Lack of information, resources, and skills: many enterprises are not ready to consume/create cloud services without excessive risk.
- The right approach to service-centric IT: service-centric IT is not the end but the means that underpins a top-down approach to IT focused on business scenarios and outcomes, rather than a bottom-up approach focused on technology and vendor stacks.
- Governance: cloud computing might be convenient and promote innovation, but enterprises are keen to ensure that these benefits are not delivered at the expense of control, and vice versa.
- QoS and reliability, availability, scalability, and security (RASS): the overall expectation is that public clouds have the upper hand when it comes to scalability while private clouds are more trustworthy when it comes to SLAs, security, and reliability. This is not always the case, particularly when it comes to private clouds.
- Cost transparency: public clouds are not as transparent as most expect when it comes to pricing, and cost comparisons can prove difficult as pricing structures grow increasingly complex. Private clouds still have a long way to go to achieve cost transparency, more for cultural than technological reasons. Cost transparency across public clouds and between public and private clouds will take a long time to be achieved, if ever.
- The automation, management, and orchestration related to services offered on public, private, and hybrid clouds.

Private clouds
Apart from generic cloud computing concerns, enterprises' main concerns when it comes to private cloud implementation include:
- Old traditional concerns related to the industrialization, consolidation, and standardization of internal data centers: the emergence of public clouds has increased the pressure on IT departments to deliver on these objectives faster than many are ready to do, to boost utilization, reliability, and flexibility.
- New generic concerns, such as the need to make data centers more energy-efficient, mostly to save money (rather than the planet per se), or to redesign data-center network topology from a three-tier hierarchy to simpler two-tier or even single-tier peer configurations to make it easier to move IT assets rapidly to where they are needed and to boost network performance.
- New concerns related to the adoption of public cloud-like ways to deliver and consume IT resources: implementing self-service portals and delivering cost transparency is a challenge. It makes more sense to boost the cost-effectiveness of currently owned data centers via virtualization, automation, and SOA than to seek to drive costs down by turning to one or more public clouds.

Public clouds
Apart from generic cloud computing concerns, enterprises' main concerns when it comes to public cloud implementations include:
- Security, regulatory compliance, and intellectual property protection are the primary concerns. "Data overseas over my dead body" remains a common refrain.
- Reliability and availability worries are growing in importance. The pain inflicted by public cloud outages can be anticipated and lessened with temporary fallback procedures.
- SLAs: enterprises want public cloud service providers to offer stronger as well as end-to-end SLAs to back up their QoS/RASS claims. Vendors are unwilling to do so for reasons that are both technology-related (especially at the level of end-to-end SLAs) and business-related (margins that are too tight).
- Migration and long-term costs: enterprises struggle to figure these out as they mostly depend on their specific circumstances.
- Demand management: since public cloud costs are usage-based, enterprises need to make sure that public cloud use is not wasted.
- Skills: money is less of an issue with public clouds, with lower upfront costs leading to a democratization of IT. By contrast, this makes skill shortages an even bigger problem.
- Immature tools for the testing, deployment, scaling, monitoring, migration/movement, and overall lifecycle management of IT resources deployed on public clouds.
- Interoperability: public cloud offerings favor integration. Portability is mostly limited to data portability. There are many standardization efforts around public clouds, most of them immature.
- The immature and rapidly evolving nature of the IaaS and PaaS markets, where vendor viability is a bigger issue than technology maturity.
- The environmental impact of the data center infrastructures that underpin public clouds, despite claims by vendors that their state-of-the-art data centers maximize utilization and have as low a carbon footprint as they can have.

IaaS public clouds


Apart from generic cloud computing and public cloud concerns, enterprises' main concerns when it comes to IaaS implementation focus on:
- how to understand and keep track of costs as IaaS cost structures become increasingly complex (see the sketch after this list)
- how to design, manage, and monitor software appliances to deliver satisfactory QoS/RASS levels
- how to port software appliances from one IaaS offering to another to minimize lock-in, or from internal data centers to IaaS to maximize the reuse of existing software appliances.
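On the first of these concerns, the sketch below shows the sort of multi-dimensional bill tracking that increasingly complex IaaS cost structures require. The rate card is an assumption for illustration; real price lists typically carry many more line items (instance families, regions, tiered bandwidth, and so on).

```python
# Sketch of tracking a multi-dimensional IaaS bill. The rate card below is an
# assumption for illustration, not a real provider's price list.
RATE_CARD = {
    "instance_hours":   0.12,   # $ per instance-hour
    "storage_gb_month": 0.10,   # $ per GB-month stored
    "data_out_gb":      0.15,   # $ per GB transferred out
    "io_million_reqs":  0.01,   # $ per million I/O requests
}

def monthly_bill(usage):
    """usage maps each metered dimension to the quantity consumed."""
    lines = {item: usage.get(item, 0) * rate for item, rate in RATE_CARD.items()}
    return lines, sum(lines.values())

usage = {"instance_hours": 1460, "storage_gb_month": 500,
         "data_out_gb": 300, "io_million_reqs": 120}
lines, total = monthly_bill(usage)
for item, cost in lines.items():
    print(f"{item:18s} ${cost:8.2f}")
print(f"{'total':18s} ${total:8.2f}")
```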

PaaS public clouds


Apart from generic cloud computing and public cloud concerns, enterprises' main concerns when it comes to PaaS implementation focus on the technology and design constraints imposed by the PaaS environment. These constraints make it impossible to migrate applications from internal IT infrastructures to PaaS and back again (unless the two use the same PaaS platform), or move them between PaaS offerings.

SaaS public clouds


Apart from generic cloud computing and public cloud concerns, enterprises' main concerns when it comes to SaaS implementation focus on the depth and breadth of the delivered application functionality as well as the configuration/customization and integration/data-portability facilities. The danger for SaaS applications, as for their on-premise counterparts, is that they get customized too much. SaaS customers are also asking for clearer and simpler pricing models (for example, fewer for-a-fee options such as mobile client support) and are concerned about the privacy issues related to vendors collecting usage data, although this collection can, for example, help vendors to come up with best-practice guidance and benchmarks.


1.2 Report objectives and structure


Chapter 1 Executive summary
This chapter provides a high-level overview of the entire report and its key findings.

Chapter 2 Cloud computing will be hybrid


Cloud computing is all the rage. It will not replace current IT offerings but will add to them in configurations that will increasingly be hybrid. This chapter explains how and why.

Chapter 3 Cloud computing costs in perspective


Cloud computing's main promise is that it can cut IT costs, which is easier said than done. This chapter goes into the details of what enterprises need to keep in mind to achieve some of the cost savings they were aiming for when first considering cloud computing.

Chapter 4 Cloud computing QoS in perspective


Enterprises want lower costs but not at the expense of QoS in areas such as RASS. This chapter looks into their QoS/RASS concerns and details whether, and to what extent, cloud service providers are meeting these concerns with better SLAs.

Chapter 5 Cloud governance: an overview


Cloud governance enables enterprises to exploit new opportunities and tackle cloud computing in a systematic manner (instead of the piecemeal approach that currently characterizes most cloud-computing-related initiatives). This chapter explains why a systematic approach needs to be woven into current ALM and SOA governance efforts as part of IT governance's efforts to cross-reference and coordinate governance initiatives.

Chapter 6 Public clouds require IT service management to adapt


As the adoption of public cloud services becomes more prevalent, IT organizations face a variety of new ITSM challenges. This chapter details why they need to ensure not only that existing policies, processes, procedures, and supporting technologies are fit for externally delivered IT services, but also that the IT function remains relevant to the business as an effective part of the IT service delivery chain.

Chapter 7 Glossary
A glossary of commonly used terms has been included.

Chapter 8 Appendix
This chapter contains information about additional reading and the methodology used for this report.


CHAPTER 2: Cloud computing will be hybrid


2.1 Summary
Catalyst
Cloud computing is emerging as a major disruptive force for both IT vendors and users. It is still very early days, however, for what many rightfully consider to be the most important trend of the decade. The next three years will see cloud computing mature rapidly as vendors and enterprises come to grips with the opportunities and challenges that it represents.

Ovum view
Some define cloud computing very narrowly as infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) public clouds. Others, Ovum included, also include software-as-a-service (SaaS) public cloud offerings and private cloud offerings. A wider perspective helps in understanding one of the key trends in cloud computing: cloud computing will be hybrid. Cloud computing promises to tackle two hitherto irreconcilable IT challenges: the need to lower costs and the need to boost innovation. However, it will take a lot of effort from enterprises to actually make it work. Instead of a nimbler IT function with its "mess for less" moved somewhere else, the ill-prepared will end up with their IT mess spread across a wider area.

Key messages
- Cloud computing is controversial and important.
- The public cloud market is more complex than expected.
- Private clouds are catching up with the public cloud Joneses.
- Hybrid clouds are the next frontier.

2.2 Cloud computing is controversial and important


It is early days for cloud computing
A fashion that will not go away
The IT industry is fashion-driven. For most vendors, as well as an increasing number of users, cloud computing is the new black. The hype surrounding it is currently at its climax, as everybody scrambles to out-cloud the competition. Fashions come and go. Two to three years from now, another buzzword will have replaced cloud computing, yet cloud computing itself will not go away. It will become a permanent feature of the IT landscape, among other types of IT consumption model. It will turn into the little black dress of the IT industry: fashionable at first, then a classic.

Important but immature


While there is a growing consensus on the importance of cloud computing, the debate still goes on as to its nature, scope, and impact, a clear sign of immaturity. Everybody has their own definition, which causes confusion. The variety of these definitions reflects:


- the multiple roots of cloud computing, which originates from developments in technology (growth in computing power and broadband connections, grid computing, virtualization), pricing/licensing models (subscription, pay-as-you-go), business models (Amazon expanding from retail to IT), and so on
- the twisting of the concept to fit the marketing approaches of an increasing variety of vendors.

Users have still to come to terms with it


Users' response to cloud computing, like other technology innovations, oscillates between conflicting personalities, best characterized by the children's book archetypes of Toad of Toad Hall ("Hurray for the shiny new thing!"), Eeyore ("Ho hum, seen it before. Didn't work last time and won't work now."), and Chicken Little ("Alarm! Disaster is upon us!"). The market may be curious about cloud computing and willing to talk about it, but it is also mostly confused and skeptical when it comes to actually implementing it.

so have vendors
Vendors are trying to position themselves as market leaders in cloud computing, either as thought leaders, technology innovators, business value providers, or some combination thereof. They want to be seen as shaping the industry's broader cloud agenda and, at the same time, working with customers to show where cloud services can deliver real-life business and IT benefits. They are trying to be smarter, more explicit, and more proactive in how they explain and differentiate their cloud offerings but, as victims of their own hype, are finding it harder to differentiate their cloud computing offerings. Many are still unclear about how, and to what extent, they will make cloud services profitable, both on their own and as part of the new cloud computing ecosystems currently forming.

Ovum stands by with a helping hand


This report clarifies and puts in context the variety of elements that are part of the current debate about, and the approaches to, cloud computing. Ovum does not wish to add its own definition to the long list of definitions already drawn. We have no intention of defining boundaries then declaring any particular component, approach, or perspective out of bounds. Instead, we seek to understand what the supply and demand sides of the IT market do in its name and to what extent this makes sense.

Evolution and disruption


Some see cloud computing as a disruptive breakthrough. Others consider it the result of a slow evolution. It is both.

Evolution
Cloud computing builds on the evolution of a variety of IT technologies (broadband connections, virtualization, etc.), designs (service-oriented architectures, multi-tenant applications, etc.), and best practices (e.g. for large-scale enterprise IT management). Many of its characteristics have been around for quite some time. For example, the shared use of excess capacity brings many back to the early days of corporate computing when time-sharing was the norm. Similarly, online shared services have underpinned the Internet itself (e.g. domain name registries) for years and various industries for even longer. Examples include global transaction platforms in the finance industry as well as computer reservation systems and global distribution systems (CRS/GDS) in the travel industry. What matters is not that cloud computing recycles certain ideas and technologies (or that older offerings are rebranded as cloud computing) but how this recycling (and re-badging) impacts the use and evolution of IT.


Disruption
Cloud computing not only recycles ideas and technologies (among other things) but also reflects disruptive IT industry trends such as the transformation of the Internet from a content delivery platform into a software delivery platform. And it is not just about Internet trends. It reflects and accelerates the commoditization of both hardware and software, as well as the weaving of software deeper into the fabric of our economies and societies. It moves the IT industry's focus from hardware towards software and services, and in doing so blurs the lines between all three. Some equate this restructuring with the ones that occurred in other industries, such as electricity (which moved from local generation to national grids) and automotive (which moved from companies owning and maintaining their own fleet of cars to renting). It impacts the way enterprises relate to:
- Their IT: it impacts the way they define, create, procure, and consume IT assets.
- Their IT departments: cloud computing both enables (by freeing IT people to focus on key projects) and undermines (by enabling users both within and outside IT departments to bypass established processes) enterprise-wide IT strategies. It redirects IT people to new activities that generate competitive advantage, as well as to their local job center via redundancies.
- One another: among other things, it makes available to SMEs what was formerly only available to larger organizations, and makes it easier for companies to share IT assets.
- Their IT vendors: cloud computing shifts the risk of investing in, and managing, IT to the vendors. This makes it a slow-moving phenomenon, as relationships evolve much more slowly than technology.

Public, private, and hybrid clouds


Public clouds: SaaS then IaaS and PaaS
The cloud in cloud computing refers to the Internet. In this public space, cloud computing started as SaaS, with Salesforce.com as the flag-bearer of this new software delivery model. It then moved to IaaS, with Amazon as the key pioneer, and then PaaS, with Google and Salesforce.com leading the way. The trendsetters were all relatively new Internet-centric companies rather than software or IT service incumbents (e.g. IBM, Microsoft, Accenture). The incumbents, however, are scrambling to catch up and have the muscle it takes to elbow their way in. No one company has a market-leading position and many vendors are, by now, playing in at least one of the three public cloud domains.

Private clouds
In the past 18 months, an increasing number of vendors and users have been pulling the cloud computing blanket into the private IT space, to the horror of those that define cloud computing as a strictly public phenomenon. What matters, however, is not whether or not private clouds are clouds at all but how, and to what extent, private clouds relate to public ones and impact the way IT infrastructures are currently evolving.

Hybrid clouds
The problem with defining boundaries in the IT industry is that technologies and their packaging are evolving in a way that quickly blurs these boundaries. In the cloud computing space, hybrid offerings are emerging between public and private clouds as well as between traditional IT services offerings (e.g. hosting, outsourcing services) and public cloud ones (see Figure 2.2.1). In fact, the hybrid cloud space is where most cloud computing breakthroughs will happen in the next five years.


Figure 2.2.1: The emergence of hybrid clouds. The figure shows hybrid clouds emerging at the overlap between the public cloud, the private cloud/internal data center, and outsourced IT assets. (Source: Ovum)

2.3 The public cloud market is more complex than expected


Infrastructure-as-a-service
Basic infrastructure building blocks
IaaS combines compute or storage or both with network resources, based on standardized hardware (servers, switches, routers, etc.) and software components (hypervisor, operating system, management, etc.) and associated services (Internet addressing, directory and security services, etc.). IaaS offerings usually require the application components that run on top of these building blocks to be packaged as virtual machines.

Wide variety of offerings


There is a wide variety of IaaS offerings, which reflects the immaturity of the IaaS market. Among other things, these offerings vary in the following ways:
- Scope: some are limited to one area (e.g. storage with Nirvanix) while others offer a broad range of services (e.g. Amazon Web Services includes compute, storage, database, and message-queuing services, among others).
- Focus: some are generic (e.g. Amazon EC2), others are specific (e.g. IBM's offering focuses on workloads such as application development and testing).
- Configuration: some enable users to do some configuration (e.g. central processing unit and memory configuration for servers). Others do not offer configurable options.
- Technology: some limit themselves to support for Linux and Windows. Others support a wider range of operating systems such as Solaris. Some support custom virtual machine images. Others don't.


New vendors are emerging, eager to add a specific spin to the IaaS theme. This trend will continue for at least two years; then, as the IaaS market matures and consolidates, it will standardize.

Ecosystem platform (e.g. Amazon)


Amazon has become the yardstick by which every other IaaS offering is measured. IaaS and PaaS providers define their pricing strategy against Amazon's. A wide ecosystem is expanding around it, with independent software vendors (ISVs), for example, keen to:
- add value on top of it (e.g. Engine Yard's Ruby on Rails platform on top of Amazon's compute service)
- use it as a new software delivery channel (e.g. Oracle partnered with Amazon to deploy Oracle software in preconfigured Amazon virtual machine images and to enable database backups on the Amazon storage service)
- build a business on top of it.

Platform-as-a-service
Makes it easier to develop and run applications
PaaS adds a new layer of software services on top of those usually found in IaaS to make it easier to develop and/or run (web) applications. While some, but not all, PaaS offerings feature development tools, they all offer run-time services usually found in application servers (e.g. transaction management, process enablement, scalability, user authentication, cache, etc.). For example, the PaaS run-time automatically takes responsibility for scaling cloud applications up and down, depending on usage levels. Developers do not have to hard-code this elasticity into their applications as they would in an IaaS environment.
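The elasticity point can be made concrete with a minimal sketch of the scale-out/scale-in decision that a PaaS runtime takes on the application's behalf, and that an IaaS user has to implement (or buy) themselves. The thresholds are arbitrary, and the instance counts stand in for calls to a provider's provisioning API.

```python
# Minimal sketch of the elasticity logic a PaaS runtime performs automatically
# and an IaaS user must implement (or buy). Thresholds are arbitrary; the
# returned instance count stands in for calls to a provisioning API.

def scaling_decision(avg_cpu_utilisation, current_instances,
                     min_instances=1, max_instances=20):
    """Return the desired instance count for the next monitoring interval."""
    if avg_cpu_utilisation > 0.75 and current_instances < max_instances:
        return current_instances + 1          # scale out under load
    if avg_cpu_utilisation < 0.25 and current_instances > min_instances:
        return current_instances - 1          # scale back when idle
    return current_instances                  # otherwise hold steady

# Example: three successive monitoring intervals.
instances = 2
for cpu in (0.82, 0.90, 0.15):
    instances = scaling_decision(cpu, instances)
    print(f"cpu={cpu:.2f} -> run {instances} instance(s)")
```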

Wide variety of offerings


As with IaaS, PaaS offerings differ in a variety of ways: not just the portfolio of technologies and services that they offer and the way that they allow users to make use of these, but also in the way that they approach run-time (e.g. reliability), development (e.g. the ability to develop outside, not just inside, of the PaaS environment), and deployment (automation) issues.

Ecosystem platform
Like IaaS, PaaS is:
- a platform on which new software ecosystems, not just applications, are being built
- related to online marketplaces in which ecosystem participants offer their wares
- encouraging developer and user communities, not just marketplaces.

Evolving towards more specialization and abstraction


The PaaS environment both helps and constrains developers in a variety of ways (application language, tools, design, etc.), which makes it difficult to migrate applications from internal IT infrastructures to PaaS and back again. As the PaaS market matures, it will divide between specialized environments that focus on one particular set of technologies (e.g. Force.com from Salesforce.com) and more generic environments that seek to lessen the constraints PaaS imposes on developers and applications (e.g. Appistry's Cloud IQ).

Software-as-a-service
The most developed public cloud market
SaaS combines application-functionality delivery via a web browser with data encryption, transmission, access, and storage services. It can be consumer-centric (e.g. Flickr's photo storage, management, and sharing offering), enterprise-centric (e.g. Salesforce.com's and Microsoft's CRM offerings), or both (e.g. Google's Gmail email offering). The SaaS market was the first to develop and, as a result, there are now SaaS alternatives to many, if not most, enterprise and consumer software products. It has expanded rapidly, attracting the attention of software incumbents, which has fueled further expansion and competition. The pioneer in the enterprise sector (ten-year-old Salesforce.com) recently became the first pure-play SaaS vendor to reach $1 billion in revenue. Other SaaS companies such as NetSuite, RightNow Technologies, and Workday aim to follow suit, although the next big SaaS vendor is more likely to be an incumbent such as Microsoft.


Part of the cloud computing phenomenon


Some point out that SaaS predates cloud computing and differs from IaaS and PaaS in a variety of ways (e.g. some SaaS offerings are not available on-demand on a pay-as-you-go basis). They prefer to exclude SaaS altogether from their definition of cloud computing, which they limit to IaaS and (in most, but not all, cases) PaaS. Others do include the enterprise-centric elements of SaaS but exclude its consumer-centric ones. Ovum believes that SaaS is part of the cloud computing phenomenon. The objective of this report is to help you understand how SaaS, in its diversity, relates to IaaS and PaaS as well as private clouds.

Wide variety of offerings


SaaS offerings vary not just in terms of the type of application functionality delivered but also in the breadth and depth of configurability and customization allowed, as well as the back-end infrastructure that underpins this functionality. For example, some have a multi-tenant design, while others use virtualization to achieve a one-to-many delivery model. While IaaS and PaaS are likely to standardize within the next three years, SaaS will continue to spread in various directions for a while longer. Besides the expansion from business applications down into PaaS (e.g. Salesforce.com's Force.com offering) and out into vertical market-specific software, one of the next big trends in this market will be the emergence of new SaaS offerings based on a combination of other SaaS offerings (including consumer and enterprise SaaS combinations, such as Salesforce.com's and Parature's integration with Twitter).

SaaS and process: business-process-as-a-service


Above the standardized SaaS layer exists a budding business-process-as-a-service layer where vendors provide a more complete combination of business process, data, and services. An example is the Australia-based Workforce Guardian service that helps customers implement compliant employment processes, including constantly updated template contracts, letters of offer and termination, and detailed process guidance specific to employment legislation. An increasing number of end users will also make available some of their internal processes as a service to third parties. Ovum knows of one financial company that has recently done so. The next step, which some call business-as-a-service, is to take over the delivery of the service (e.g. payroll) on an outsourced basis.

SaaS and professional services: Service-with-Software


Besides pure-play SaaS vendors, a number of players offer SaaS and business services in one continuous, single unified service in a variety of markets. A good example is Experian. Everyone thinks of Experian as a credit reference agency, but it is more than that. On top of its data provision service, the company has built a range of SaaS decision-support and marketing tools. This service-with-software approach is also likely to be adopted by an increasing number of end-user companies, especially those in the professional service sector such as recruiting agencies, accountancy/audit firms, and legal services providers. Using SaaS allows businesses in these sectors to enhance the value of their services and reduce the cost of delivering them. For example, suppose that an audit firm required its customers to use its SaaS accounting software to manage their businesses. At the end of every audit period, the audit firm would have all the information it needed to perform the audit, already pre-processed into the format required.

Hosted software to SaaS continuum


There is an increasingly complex continuum between application-centric IT services and SaaS. Application-centric IT services include:
- Hosted dedicated software: vendors own and run the software on their own servers but deploy different instances of the software for each customer.
- On-premises managed applications: the application resides on the enterprise's server or premises but is owned and managed by the vendor. Unlike SaaS offerings, they do not typically include the cost of the license and do not usually rely on a multi-tenant architecture. The latter is key to reducing management costs.


IaaS, PaaS and SaaS interactions


Not quite a stack
IaaS, PaaS, and SaaS are usually graphically depicted as a stack, with IaaS at the base, PaaS sandwiched in the middle and SaaS on top. This depiction is useful but it leads to a number of misconceptions (e.g. that SaaS depends on PaaS, that IaaS is inferior to PaaS, etc.). SaaS can run on top of either IaaS or PaaS environments, among others (private cloud, outsourced data center, etc.), while IaaS and PaaS environments can run any type of application, not just SaaS ones (as shown in Figure 2.3.1).
Figure 2.3.1: Not quite a stack. The figure shows SaaS and non-SaaS applications running on top of either PaaS (tools and runtime) or IaaS (compute, storage, and network), with both of these in turn resting on internal and/or outsourced IT infrastructure. (Source: Ovum)

PaaS offerings have IaaS components (e.g. Microsoft's Azure Services Platform has an IaaS storage component). Some are layered on top of, and independent from, an IaaS foundation. For example, Tibco Silver is available as a pre-built, preconfigured Amazon Machine Image (AMI) and Tibco plans to add support for other third-party IaaS environments, including those supporting VMware technology.

Different markets that vendors are increasingly straddling


IaaS, PaaS, and SaaS are different markets that cater to different needs, types of application and target audiences. IaaS and PaaS target IT personnel (developers and IT operations), while SaaS targets both IT personnel (with development tools for developers, for example) and end users (with business applications). No single provider currently offers a comprehensive and complete suite of IaaS, PaaS and SaaS, but we expect big infrastructure vendors to do so within a year, for customers, partners and third parties to mix and match. As a result, the boundaries between IaaS, PaaS, and SaaS are likely to blur.

Hard to differentiate at times


It can already be difficult sometimes to clearly distinguish between IaaS, PaaS, and SaaS. Storage services, for example, can be IaaS (e.g. Amazon S3) or part of a PaaS or SaaS offering. Some are SaaS relying on IaaS (e.g. Zmanda back-up on Amazon S3). Virtual desktop infrastructure software is another good example. Some define it as an IaaS offering because it is infrastructure software, others as a SaaS offering because it delivers desktop functionality online. It is SaaS, not IaaS.

Common characteristics
IaaS, PaaS, and SaaS have various characteristics in common:
- They are online services delivered by a third party, available to any connected device over the Internet (or other networks) via web-browser graphical user interfaces (GUIs), and to other software mostly through open application programming interfaces (APIs). The latter vary depending on whether the service is IaaS, PaaS, or SaaS (and between offerings in the same area, e.g. IaaS, which creates complexity).
- They offer standardized services. They are user-centric, greenfield approaches to simplifying and standardizing IT to make it much easier to try, buy, and consume IT services. Standardization was initially viewed as a limitation (especially for SaaS), but is increasingly becoming best practice.
- They do not require ownership or management of hardware or software (other than the access device). The service provider takes care of hosting, developing, managing, and maintaining the services.
- They are always on.
- They rely on massive data centers that mix virtualization and automated system management technologies to provide economies of scale (and low cost) as well as a satisfactory quality of service (QoS) in areas such as reliability, availability, scalability, and security (RASS).
- They use operational support systems for functionality such as management and reporting, and business support systems for functionality such as usage metering, service activation, and billing. These vary depending on the type of public cloud offering.
- They are available via a self-service online catalogue or portal that offers transparent pricing information. Besides being self-service, they enable customers to access peer support.
- They enable vendors to upgrade all users centrally and evolve the service iteratively based on customer feedback as well as actual user behavior data.

Positive feedback loop


There is a positive feedback loop between all three types of public cloud offering, as follows:
- SaaS users are likely to be more open to experimenting with IaaS and PaaS, and vice versa.
- The lower cost that IaaS and PaaS provide via economies of scale will lower SaaS costs (as SaaS vendors increasingly build their offerings on a PaaS or IaaS foundation, or both), thus strengthening SaaS and expanding the potential of IaaS and PaaS for economies of scale.
- A growing ecosystem of partnerships is being built that straddles all three public cloud domains (e.g. the alliance between Salesforce.com, Amazon, and Google allowing enterprises to use the Force.com PaaS platform as the main platform, but to reach out to Amazon for its IaaS offering for storage and to Google for SaaS document sharing and collaboration). Partnerships are also growing within each market (e.g. the alliance between SaaS providers NetSuite and Salesforce.com to create SuiteCloud Connect).

Convergence
This feedback loop is strengthened by the increasing convergence of SaaS with IaaS and PaaS from a variety of perspectives:
- Infrastructure: besides building their own data centers or outsourcing them, SaaS ISVs increasingly use IaaS or PaaS clouds, or both, as the foundation for their services, especially since an increasing number of IaaS/PaaS providers (e.g. Fujitsu) target them with SaaS ISV-specific services.
- Service portfolio: Salesforce.com, for example, has expanded from SaaS to PaaS (Force.com). Google and Microsoft started with SaaS then moved to PaaS. IBM offers both IaaS and SaaS but is likely to expand to PaaS.
- Ecosystem: cloud computing vendors are increasingly partnering with one another.
- On-demand availability on a self-service basis: most consumer SaaS services are available on this basis, the way that IaaS and PaaS are. This is less the case for enterprise SaaS (but not all, e.g. Webex), although some enterprise SaaS offerings (mostly from new start-ups) are moving in this direction.
- Pricing: many consumer SaaS offerings are free to use (financed by advertising), although an increasing number of vendors are moving towards subscription revenues. IaaS and PaaS are unlikely to adopt the same approach, although some vendors will move towards free-to-try or set-up offerings, or both.
- Pricing/licensing elasticity: most enterprise SaaS is available via automatically renewed annual subscription contracts that make it relatively easy to increase the number of user seats but not to decrease them: the latter is usually only allowed at the end of annual subscriptions. Newcomers (rather than incumbents, which will stick to the subscription model) will adopt a more IaaS/PaaS-like usage-based/metered approach with no long-term commitment/contract and the ability to scale usage up and down on demand (see the sketch after this list). Amazon is also pushing ISVs in this direction. Its market presence and established metering and billing processes have resulted in many software offerings being available as SaaS on the Amazon IaaS cloud (using virtualization to achieve a one-to-many delivery model). IBM, for example, is expanding the portfolio of its software available on Amazon EC2, based on a new Processor Value Units pricing model. IaaS and PaaS will move in the opposite direction, towards a subscription model. Some enterprises, especially large ones, prefer the more predictable subscription approach to pricing, which often includes volume discounts.
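To illustrate the difference in commitment between seat-based subscriptions and metered usage, the sketch below prices the same fluctuating user population both ways. The per-seat and per-user rates, and the usage pattern, are invented for the example; the point is only that annual seats are sized for the peak while metered charges track actual use.

```python
# Seat-based annual subscription vs usage-based metering for the same service.
# Both price points and the usage pattern are assumptions for illustration.

monthly_active_users = [120, 125, 130, 180, 250, 240, 150, 130, 125, 120, 118, 115]

def subscription_cost(peak_seats, usd_per_seat_year=300):
    # Seats are committed up front for the year and sized for the peak,
    # because mid-term reductions are typically not allowed.
    return peak_seats * usd_per_seat_year

def metered_cost(users_by_month, usd_per_user_month=30):
    # Pay only for users actually active each month; scale down freely.
    return sum(u * usd_per_user_month for u in users_by_month)

print(f"Subscription (peak-sized): ${subscription_cost(max(monthly_active_users)):,}")
print(f"Metered (actual usage):    ${metered_cost(monthly_active_users):,}")
```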

Wider perspective
IT services impact on public clouds
The boundaries are not only blurring between IaaS, PaaS, and SaaS but also between cloud computing offerings and traditional IT service offerings (e.g. hosting and outsourcing). The flexible pay-as-you-go-and-as-much-as-you-consume approach of part of the cloud computing phenomenon could have an obvious effect on services firms' bottom lines, and such firms are understandably not ready to abandon traditional businesses in favor of trumpeting cloud-only models for their enterprise customers. For now, they are positioning cloud-based services as an additional delivery model, one that could work in tandem with IT services delivered either in-house or through an external provider.

They position themselves closer to cloud computing not only by providing IaaS, PaaS, or SaaS clouds (or all three) themselves, or by mixing public cloud services consumed in a one-to-many model with dedicated one-to-one (on- or off-premise, or both) services, but also by adding public cloud computing characteristics (e.g. fast provisioning) to their traditional services. Customers have started to mix and match traditional IT and public cloud computing offerings. For example, Wordpress.com combines hosted servers and Amazon's S3 IaaS storage services to run its blog platform.

Cloud computing is transforming the relationship that IT service providers have with their customers. With cloud computing, whose value proposition centers on the delivery of a standardized set of services, the customer adapts to the offering, not the other way around, as was the case until now for IT services. We expect a lot of hybrid solutions stemming from the convergence of these two approaches (with various levels of balance between adaptation to the customer and customer adaptation).

Public sector impact on public clouds


When it comes to IT, the public sector used to be the last to know and implement. This is no longer the case. For a variety of reasons (renewed focus on cutting IT costs, creation of shared services, etc.), this vertical sector has morphed into a much more aggressive user of IT assets, with a particular interest in open-source software and cloud computing. Its influence on the IT market is compounded by the fact that it is not only a user but also the legislator. Government clouds (g-clouds) are emerging not only for internal use (between public sector agencies) but also as tools to promote national interests and the local ICT industry. The objective is to avoid substantial proportions of national IT workloads, along with their tax revenue and employment opportunities, moving offshore. To do so, governments need to bring together the often separate, and contradictory, policy portfolios of industry development and public sector efficiency to drive investment in domestic cloud infrastructure. G-clouds will not be the only vertical market-specific public clouds to emerge. One of the key trends in the cloud computing movement will be the verticalization of public cloud offerings.


2.4 Private clouds are catching up with the public cloud Joneses
Cloud computing shifts to private clouds
In the first 18 months of its existence, cloud computing was a purely public phenomenon. The following 18 months saw a significant shift in focus away from public clouds towards a new concept (that of the private cloud), owing to a powerful mix of vendor push and user pull.

Vendor push
The private cloud is, to a large extent, a rebadging of what data-center-focused hardware, software, and service vendors have been doing under different names (utility computing, autonomic IT, on-demand data center, etc.) for the past ten years. In this market, vendors offer either building blocks or complete solution offerings. For example, IBM offers:
- The IBM CloudBurst Smart Business System platform, which combines hardware, storage, and networking components with virtualization and management software as well as on-site QuickStart implementation (and training) services. It is part of IBM's ongoing efforts to combine network, compute, and storage capabilities into packaged offerings that are easier to manage and consume (with a single invoice, installation, and support structure).
- The Smart Business (private) cloud, which is a service-wrapped hardware and software offering that can leverage existing IT infrastructure or use IBM CloudBurst, or both. It is a natural extension of IBM's data center building and management capabilities. IBM can not only put a private cloud together, but also manage it or leave its management to the customer.
Many of IBM's IT service competitors have made similar moves. Not only incumbents like IBM but also cloud computing start-ups (mostly in the PaaS domain) are taking a keen interest in private clouds. They want to bring public cloud technologies into the data center, with the ambition to elbow the incumbents out of (at least parts of) the data center (as well as help customers manage complex, hybrid IT infrastructures that combine public cloud resources with locally managed ones).

User pull
Many users are wary of the QoS/RASS as well as legal/compliance limitations of the various types of public cloud, but are curious about the possibility of adopting the technologies, designs, and best practices of public clouds (e.g. virtualization, standardization, and automation) to create elastic pools of IT resources within their firewall to cut costs and make it easier and faster to provision new services. Users' caution is strengthened by the unwillingness of some IT departments to adopt public cloud components for fear of losing control (as well as their jobs).

The shift reflects the maturation of cloud computing


Many disagree with the notion of the private cloud
Many consider the notion of the private cloud to be an oxymoron for a variety of reasons, including: Semantics: since the cloud is the Internet, and the Internet is a public infrastructure, clouds cannot, by definition, be private. They are about third parties providing services via the Internet. Economics: cloud computing is about changing the way IT resources are paid for, not just consumed. It enables users to avoid upfront infrastructure costs. They simply rent, which makes it easier for them to move on. Infrastructure: a key feature of cloud computing is scale, and not all private clouds will have sufficient scale to actually be clouds at all.

- Marketing hype: vendors seeking to sell new hardware, software, and services to enterprise chief information officers (CIOs) are the guilty parties here.
Public cloud supporters argue that extending the definition of cloud computing to private clouds stretches the concept so much that it loses all meaning. They also point out that users will grow increasingly skeptical as the term gets applied to all and sundry. We agree with most of these points, yet we do not believe that arguing over definitions is worth the effort.

It is simply a reflection of cloud computing's success


Every IT trend, when it gains momentum, gets redefined as an increasing number of vendors and users manipulate it to fit their needs and expectations. From that point of view, the current market interest in private clouds is a natural evolution of cloud computing. It is the price cloud computing has to pay for becoming mainstream, reflecting the need to relate the success of public clouds to what currently exists inside firewalls. Rather than fight over definitions, we prefer to understand how, why, and to what extent public and private clouds relate to one another.

New twists and old trends


Old trends
Infrastructure vendors have long been busy promoting the industrialization, consolidation, and standardization of next-generation data centers based on:
- automated management processes to scale the infrastructure without having to scale the IT workforce with it
- virtualization technologies to boost utilization and allocate resources dynamically
- service-oriented architectures (SOAs) to redesign IT software into reusable components and get rid of IT silos in shared IT infrastructures.

New twists
The private cloud embraces automation, virtualization, and SOA while increasing data center focus on the following, among other things:
- Self-service portals that provide instant on-demand access from a catalogue: the catalogue becomes the key interface that provides product detail and pricing information, enables configuration, manages the ordering process, and offers account management tools. It is also key to making sure that governance policies apply to clouds: IBM's Service Catalog portal, underpinned by Tivoli Service Automation Manager, for example, enables IT departments to limit users' choices to a predefined set of services.
- Providing access not only to internal standardized and reusable services but also to a marketplace of vetted applications and a community of peers: IBM, for example, has an online catalogue for both its SME and large enterprise Smart Business offerings. While the SME online catalogue is generic and geared towards third-party partners (26 in India and 16 in the US as of May 2009), the large enterprise catalogue is workload-specific and, in the case of the application development and test workload, mostly limited to IBM software. The SME-focused IBM Smart Market is not just an online application store, but also a marketplace and related portal with Web 2.0 features to compare, rate, and buy applications and services (such as managed security and hosted backup). It is also a community in which clients, experts, and vendors interact. The large enterprise equivalent is likely to evolve along these lines.
- Usage monitoring and chargeback mechanisms: metering and billing is the next frontier when it comes to private (and public) clouds. Many enterprises are not able to calculate and then allocate costs, despite being willing to do so, as they do not have the necessary infrastructure, processes, and metrics in place (a minimal chargeback sketch follows this list).
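The metering-and-chargeback gap described above lends itself to a simple illustration. The sketch below is a minimal, hypothetical chargeback calculation, not a description of any vendor's tooling: metered usage records are aggregated per department and priced against an internal rate card. All rates, metrics, and department names are invented for illustration.

from collections import defaultdict

# Hypothetical internal rate card (illustrative figures only).
RATES = {"vm_hours": 0.08, "storage_gb_months": 0.12}

# Usage records as a metering system might emit them:
# (department, metric, quantity)
usage_records = [
    ("finance", "vm_hours", 1200),
    ("finance", "storage_gb_months", 500),
    ("marketing", "vm_hours", 300),
    ("marketing", "storage_gb_months", 2000),
]

def chargeback(records, rates):
    """Aggregate metered usage per department and price it."""
    bills = defaultdict(float)
    for department, metric, quantity in records:
        bills[department] += quantity * rates[metric]
    return dict(bills)

if __name__ == "__main__":
    for dept, amount in chargeback(usage_records, RATES).items():
        print(f"{dept}: {amount:.2f}")

However simple, a calculation of this kind only becomes possible once the metering infrastructure, the rate card, and the governance around them are in place, which is precisely what many enterprises still lack.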

Two roads ahead


The public cloud space is neatly segmented into IaaS, PaaS, and SaaS, but the private cloud space is more amorphous. Users are unsure of, or have very different views on, what constitutes a private cloud, which makes any survey about the subject pretty meaningless. Vendors approach private clouds from two perspectives: as a journey and as a shortcut.

Private cloud as a journey


The first approach defines the private cloud as a long, patient evolution that starts with companies understanding what they currently have, then shaping it slowly to achieve a fully dynamic shared infrastructure. From this perspective, the private cloud is the aim of the data center evolution journey. It is from this standpoint that many discussions around cloud computing start. By and large, customers rarely approach service providers to inquire about implementing cloud services within their organization; more often, a discussion around security or virtualization eventually evolves into broader strategies around leveraging the cloud for cost or operational efficiencies.

Private cloud as a shortcut


The second approach emphasizes the need to take shortcuts along the way by pushing parts of the data center ahead to deliver focused return on investment. From this perspective, the private cloud is the part(s) of the data center that is ahead of the rest. Some shortcuts are technology-centric: they are about purchasing modular hardware and software components tuned to specific workloads, such as test and development. Some are design-centric: the objective is to turn existing IT assets from siloed resources into pools of resources to be shared (pools that are defined as private clouds). A good example is the blade farms that finance companies use for processing-intensive applications such as simulation (e.g. Monte Carlo risk analysis). The goal of redesigning these farms is to make their compute resources available to other applications, not just simulation.

Two approaches to be reconciled from a process, not technology, perspective


What is needed is a way to reconcile the two approaches (private-cloud-as-a-journey and -as-a-shortcut) to understand when, on the road towards a next-generation data center, users should take shortcuts. Unfortunately, most vendors currently emphasize the second approach rather than trying to reconcile the two. They also take an overly technology-centric approach: standardization, virtualization, and automation are only the starting point, not the endgame. The focus should be on people, policies, and processes rather than technology, from both an IT-business alignment and an internal IT point of view. For example, data center managers need to consolidate processes, policies, and best practices (logical consolidation) before tackling physical consolidation, not the other way around. They also have a tendency to minimize the challenges of transforming a traditional IT shop into a shiny new private cloud (as they did when the fashion was to redesign the IT shop on a SOA basis). This requires significant investments in the latest data center technologies and the skills to operate them. The problem is that you can't make a silk purse out of a sow's ear. In many cases, it is a triumph of hope over experience to allow a CIO to run a big project to build an internal cloud, which is the reason some prefer to just go out and rent an existing cloud or buy one pre-built from IBM or HP.

2.5 Hybrid clouds are the next frontier


From public to private clouds
Public clouds encourage data center transformation
Public clouds encourage enterprises to move to a next-generation data center or private cloud faster than they otherwise would, by providing an example to follow. These enterprises will import some of the technologies and best practices of the massive data centers that underpin public clouds.

Public clouds extend private clouds


As an extension, the public cloud provides a place for private clouds to reach when and where there is a short-term need for extra resources, hence the term 'cloudburst', whereby private clouds burst into public clouds, driven by bursty software whose resource requirements (such as storage and computing) see-saw considerably. The objective is to help clients avoid having to build another data center for peak requirements.
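To make the cloudbursting idea concrete, the sketch below shows the kind of threshold rule a private-cloud scheduler might apply: keep workloads local until utilization passes a ceiling, then overflow the remainder to a public provider. The capacity figures and the placement function are hypothetical placeholders, not a real scheduler or provisioning API.

PRIVATE_CAPACITY = 100   # hypothetical capacity units in the private cloud
BURST_THRESHOLD = 0.85   # burst once the private pool is 85% utilized

def place_workload(demand_units, private_load):
    """Decide how much of a new workload stays private and how much bursts.

    Returns (private_units, public_units). Purely illustrative logic.
    """
    headroom = max(0, int(PRIVATE_CAPACITY * BURST_THRESHOLD) - private_load)
    private_units = min(demand_units, headroom)
    public_units = demand_units - private_units
    return private_units, public_units

if __name__ == "__main__":
    # A seasonal spike of 40 units arrives while the private pool is 70% busy.
    kept, burst = place_workload(40, private_load=70)
    print(f"run privately: {kept} units, burst to public cloud: {burst} units")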

Public clouds complement private clouds


As a complement, the public cloud provides a domain in which enterprises can offload computing activities they do not wish to do themselves (for a variety of reasons such as high costs or insufficient resources), leaving more of their resources free to focus on private-cloud-based IT activities that deliver competitive differentiation.

A variety of hybrids
Not public or private, but both, and then some more
When not debating the nature of a 'true cloud', participants in the cloud computing discussion like to explain why their preferred type of cloud is better than the other. Some define private clouds as superior to public ones because they offer more control and can run legacy applications, among other things. Others favor public clouds because of their flexibility, lack of upfront capital expenditures (capex), etc. Each type of cloud has its strengths and weaknesses, but that does not make it necessarily better or worse than the other, simply different. The 'yours is better than mine' private-versus-public debate is all the more pointless because the reality of cloud computing is not at the extremes but in the middle, being:
- not just one or the other but a mix of private and public clouds
- not just one and the other but a variety of hybrid offerings in between.
For example, an increasing number of companies will turn part of their private cloud into a public one, for partners as well as customers. Some of these offerings will be infrastructure (IaaS/PaaS-type) ones; others will be SaaS/business process offerings.

Generic-specific cloud hybrids


Some point out that private clouds are generic (they need to be able to run a variety of workloads), while public clouds are specific (optimized to run a particular type of workload). Others believe it to be the other way around: private clouds can be customized more easily for specific needs, while public clouds are more generic because they are designed around a common denominator approach for satisfying the needs of a broad range of workloads. In reality, private clouds can be whatever they need to be, and the link between public clouds and workload specificity is not so clear-cut. Broadly speaking:
- IaaS clouds are generic. They provide an infrastructure capable of running a variety of workloads.
- PaaS clouds are halfway houses. They constrain the design of the software that they run in order to optimize performance. As a result, they may not be suitable for certain workloads.
- SaaS clouds are mostly specific, optimized to deliver a particular type of workload (although the underlying platform of many of them is maturing into an increasingly generic one).

Dedicated shared cloud hybrids


Others point out that private clouds are usually dedicated to a single organization, while public ones are shared.

The two issues are orthogonal to one another:
- Some private clouds are shared: the notion of a private shared cloud is particularly popular among public sector organizations at the moment. We expect that the success of cloud computing as a concept will lead to more private clouds shared across different organizations, not just business units of the same organization. For example, IT services company Savvis offers a variety of cloud services, available both as a dedicated infrastructure offering devoted exclusively to a single customer and as a shared infrastructure service in which pooled resources are provided to multiple customers on an anonymous, on-demand basis and at a lower price point. It is also considering a cloud exclusively for the use of customers in the capital markets, yet shared among them.
- Some public clouds have parts dedicated to a single user: in August 2009, for example, Amazon Web Services (AWS), the cloud computing subsidiary of Amazon, added a virtual private cloud offering to its portfolio. We expect hybrid public-cloud-based virtual private clouds to become one of the major trends in cloud computing in the next two years.

Hybrid products for hybrid clouds


An increasing number of products are taking advantage of the rise of the hybrid cloud for the following reasons:
- To straddle both public and private clouds: an increasing number of applications (e.g. Dynamics CRM 4.0) use the same code base for on-premise and SaaS implementations.
- To deliver extra functionality using public cloud resources, such as storage (e.g. office productivity applications are starting to enable users to save and open documents in public clouds) and computing resources (e.g. anti-virus software moving most of its protection activity into a public cloud while the content remains with the user).
- For more flexible asset usage: a good example of this is Atlassian's Elastic Bamboo continuous integration server. Elastic Bamboo supports frequent integration of code under development with the main body of the release. New code can be automatically compiled and tested as it is developed, accelerating the feedback loop and minimizing the 'chasing your tail' effect that arises when teams of developers periodically apply new code to a constantly changing code base. Continuous integration, however, can create large and unpredictable server workloads as multiple cycles of integration and testing are carried out, particularly in the lead-up to a major release. Scaling the hardware to accommodate peak loads is not necessarily justifiable in terms of cost. Elastic Bamboo enables developers to use a mix of on-premise and remote (cloud-based) agents for integration and testing, minimizing the need to bolster local resources to accommodate peak testing loads. The system provides seamless access to the processing resources of Amazon's EC2 cloud services.

Complexity lies ahead


No cloud is particularly easy to implement
Public and private cloud supporters have one thing in common: they all claim that their favorite type of cloud is easy to implement, while pointing out the difficulties of implementing the other type. They are both right and wrong: cloud computing is not that easy to implement. It needs planning and preparation. Besides the current confusion as to what exactly cloud computing is, many enterprises lack the knowledge, skills, and metrics (among other things) to figure out what is best for them, hence the increasing number of vendors offering their services to help them do just that.

Cloud computing simplifies but creates new challenges


The cloud may simplify the delivery and provisioning of IT resources, but it also has the potential to create a host of new management challenges, especially if a customer wants to access hybrid cloud services (combining services delivered from both private and public clouds). This hybrid approach requires the integration of provisioning, billing, and management capabilities, among others. How service providers deliver service-level management (SLM) and cloud orchestration, whether through the development or acquisition of proprietary tools or through partnerships, is poised to emerge as a highly competitive area in the cloud services market.

The hybrid public/private cloud infrastructure is going to get complex


As hybrid public/private cloud infrastructures emerge, the link between private and public clouds will at first be one-to-one. Then, as private and public clouds mature and enterprises develop their understanding of clouds and their ability to make the best of them, they will use and manage a growing portfolio of federated public and private cloud services. Workloads will fluctuate between private and public clouds based on seasonal traffic patterns or changes in user behavior.

2.6 Recommendations
Recommendations for enterprises
Be specific
Cloud computing means different things to different people. When talking about it, be specific. Are you talking about IaaS, PaaS, SaaS, public clouds, private clouds, or a mix?

Be pragmatic
It is difficult not to get drawn into the endless debate about what cloud computing is and is not, yet you need to avoid it. Most of it is not only pointless, but also tends to push participants to extremes, even though the reality of cloud computing is somewhere in the middle.

Know yourself
In order to figure out which IT assets to keep and manage within a private cloud, which to trust to a traditional IT service provider, and which to source from the various public cloud solutions on offer, you need to understand what you have and where you want to go.

Focus on the end, not the means


Cloud computing is a means to an end, not the end itself. Focus first on the business outcome you have in mind, then on how a particular approach to cloud computing may or should deliver it.

Consider whether you are ready for cloud computing, not just whether cloud computing is ready for you
Adoption is a two-way street. It is not just about whether cloud computing is ready for you: it is, more importantly, about whether or not you are ready for it. The fact is that many enterprises are currently not ready for private or public clouds, or any type of hybrid in between. Many enterprises lack the knowledge, skills, and metrics (among other things) to figure out what is best for them, hence the increasing number of vendors offering their services to help them do just that. Train specialists and adapt systems, processes, and metrics to remain in control while benefiting from the instant provisioning capability of public clouds.

Head up, take stock, and create your own recipes


Get briefed on the latest developments in cloud computing and the implications for your vertical sector. Find out if and where cloud services are already being used in your organization and others, and discuss the strategy and policy implications with customers and stakeholders. Mix and match public and private cloud elements with traditional hosting and outsourcing services to create solutions that fit your short-term and long-term requirements.

Take your time, but start now


The hype surrounding cloud computing makes a lot of enterprises think that if they do not act now they will be left behind. This is partly true for vendors, but enterprise users should take their time. Cloud computing is here to stay. It will become part of an enterprise's IT arsenal, and organizations need to get acquainted with it now but adopt it at their own pace.

Look at cloud computing with a fresh eye


Look to what the cloud can offer and where it might best be applied, rather than being so preoccupied with its shortcomings that you fail to recognize its value. Avoid the temptation to impose the full baggage of legacy IT expectations, requirements, and regulations on cloud services. Many of the benefits of cloud computing stem from the fact that it is a commoditized, standardized, take-it-or-leave-it service environment. Successful early adoption of cloud services will require an acceptance of its limitations, astute selection of appropriate opportunities, and a preparedness to solve the new problems that will emerge. The trick is to apply it to areas where in-house IT is failing your enterprise, rather than seeking to apply it to areas where in-house IT is already adequate.

Cloud computing will make IT more complex


As enterprises weave cloud computing into their IT mix, they will face increasing interoperability and management problems. It will take a lot of effort to make this work. Instead of becoming nimbler and reducing the costs for maintaining the status quo, ill-prepared IT organizations will end up with their IT mess spread across a wider area as they venture into the cloud.

Recommendations for vendors


Acknowledge the diversity of the cloud phenomenon
Besides proclaiming the superiority of the cloud offering you deliver (IaaS, PaaS, SaaS or private cloud, or all of the above), acknowledge the diversity of the cloud phenomenon. Point out that the cloud will be hybrid both in itself (with a mix of IaaS, PaaS, SaaS and private clouds) and in relation to more traditional offerings (e.g. a mix of IaaS and hosting).

Weave cloud computing into your strategy


Ignoring this new IT delivery model is not an option. Cloud computing will certainly not be the only mode of IT provision but it will be an important element of the IT market. We do not suggest that every vendor ought to offer cloud services, but every vendor ought to have a strategy of how to accommodate cloud computing and its ramifications.

Focus on why customers should, rather than whether they can


Vendors need to shift from helping enterprises move to the cloud to figuring out whether, and to what extent, they should do so. The trick is to look for opportunities to apply the new logic and capabilities of cloud computing in areas where the current in-house or outsourced IT operations (ITO) are inadequate, broken, or too expensive. Public cloud service providers as well as those targeting private clouds should make more effort to frame the debate from a business perspective.

More explicitly define how to get from current data centers to private clouds
Depending on whom you talk to, the private cloud is either the aim of the data center evolution (the next-generation data center) or the part(s) of the data center that is ahead of the rest (with specific workloads running on the part or parts reengineered to act as a cloud). What is needed is a way to reconcile the two approaches (private-cloud-as-a-journey and -as-a-shortcut) to understand when, on the road towards next-generation data centers, users should take shortcuts. Unfortunately, most vendors currently emphasize the second approach rather than trying to reconcile the two.

CHAPTER 3: Cloud computing costs in perspective

3.1 Summary
Catalyst
Most cloud strategies have an initial focus on cost reduction or improved cost efficiencies, for two main reasons: the global recession, and the failure of many on-premise IT investments to deliver the cost savings they were supposed to. This is easier said than done, as cloud computing may lead to some cost reductions but introduces complexities of its own for users. This report looks at the costs, pricing, and licensing issues related to the various types of public cloud offering. It contrasts these offerings with one another and compares them with private clouds.

Ovum view
Cloud computing's focus on cost is part of a wider cost scrutiny effort
Software vendors have always offered their wares on the promise of cutting costs or boosting productivity. The lure of cost savings usually turns out to be a triumph of hope over experience, however. Cost savings are now a more explicit and definite objective. Many organizations are strengthening benefit realization processes that increase executive accountability for cost-saving targets. This is flowing through to an increasing interest in cloud computing from early technology adopters.

Public clouds can lower costs, but need to be scrutinized


Public clouds lower upfront and operational costs by shifting investment in, and management of, IT assets to public cloud providers. However, they are not necessarily less expensive than internal data centers (especially well-managed private clouds) and traditional service offerings. It depends on a variety of criteria, including the pricing structure of public clouds (a structure that is quickly becoming complex) and whether the design of the applications that run on them makes the best use of this structure.

Practicality is more important than lower costs


Public clouds may not be cheaper (and, in many cases, they are not), but they are certainly easier to procure than on-premise hardware and software assets. Their flexibility is more appealing to some enterprise customers than their claimed cost-lowering properties. Enterprises are often happy to pay a premium for the convenience that public clouds offer.

IaaS, PaaS, and SaaS are converging from a pricing and licensing perspective
The subscription model used by software-as-a-service (SaaS) as well as open-source vendors disrupted the status quo of the incumbent software vendors. The transparent and flexible usage-based/metered approach of infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) is even more disruptive. The IT industry is moving towards hybrid pricing and licensing approaches to public clouds to cater for as wide a range of needs as possible. Consequently, while SaaS will move towards the pay-as-you-go (PAYG) approach, IaaS and PaaS will move in the other direction, towards subscription and less flexible, but cheaper and more predictable, schemes.

Private clouds are emerging to make the internal data center more cost-effective
The notion of private clouds has become popular with the objective of boosting data center economics (an endeavor that was hastened by the emergence of public clouds but predates this emergence). For many large enterprises it makes more sense to boost the cost-effectiveness of currently owned data centers (via virtualization, automation, service-oriented architecture, etc.) than to seek to drive costs down by turning to public clouds.

Hybrid clouds are the way to go but will be complex


In order to assess, on a project basis, whether to turn to public or private clouds, or both (or to opt for more traditional on-premise software or outsourcing options, or both), enterprise customers need to have the infrastructure in place to understand and manage private as well as public cloud costs. The problem is that this infrastructure is still in the making.

Key messages
- Enterprises need to scrutinize and adapt to public cloud cost characteristics.
- Public cloud pricing structures are evolving, but not always as expected or for the better.
- Private clouds put public cloud costs in context.

3.2 Enterprises need to scrutinize and adapt to public clouds' cost characteristics
Public clouds have attractive cost characteristics
Lower costs via economies of scale
The cost advantage of public cloud services is based on the economies of scale achieved in the large (often brand new and state-of-the-art) data centers that underpin public cloud services. These economies stem from a variety of sources, such as:
- Volume discounts for resources including facilities, power, bandwidth, hardware, and software.
- Automation, consolidation, and standardization efforts (for example, in provisioning, maintenance, and backup) that minimize costs by enabling a smaller group of people to take care of a larger pool of IT assets.
- Centralization of resources, which enables, for example, easier and faster upgrades (along with software design choices such as multi-tenancy).
- The use of commodity hardware and software assets: software is often tweaked or custom-built to make the most of its underlying commodity hardware. Most of this is open source, with the public cloud service vendor taking charge of freely available source code and optimizing it for its own ends, rather than turning to commercially available versions of open-source products.
- Uncorrelated demand aggregation: as they pool demand from multiple customers, cloud service providers reduce usage variation and maximize utilization. For example, they can help the retail sector deal with the peak of Christmas shopping and the public sector with tax return season, without users in these sectors having to invest in IT assets that would be underused at other times of the year (a small worked example follows this list).
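The aggregation point can be illustrated with a small, invented example: if a retailer's demand peaks at Christmas and a tax agency's peaks in the spring, a shared pool only has to be sized for the peak of the combined load, which is smaller than the sum of the two individual peaks. The monthly figures below are hypothetical.

# Hypothetical monthly demand profiles (arbitrary capacity units).
retail = [20, 20, 20, 20, 20, 20, 20, 20, 30, 40, 60, 100]   # peaks in December
tax    = [30, 40, 100, 60, 30, 20, 20, 20, 20, 20, 20, 20]   # peaks in March

dedicated = max(retail) + max(tax)                # each sized for its own peak
shared = max(r + t for r, t in zip(retail, tax))  # pool sized for combined peak

print(f"dedicated capacity needed: {dedicated}")  # 200
print(f"shared pool capacity needed: {shared}")   # 120 in this example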

Lower upfront and operational costs


Public clouds:
- do away with (hardware and software) upfront costs via subscription and PAYG pricing and licensing models, with the public cloud provider investing in the infrastructure required to deliver the cloud services
- reduce ongoing operational costs (development, deployment, integration, maintenance, support, upgrade, consolidation, training, administration), as the public cloud provider takes responsibility for the lifecycle of the hardware and software components required to deliver cloud services.

Ongoing costs typically far exceed upfront costs, so reducing them significantly drives down the total cost of ownership of IT assets delivered via public clouds. Lower upfront and ongoing costs make it easier to correlate the benefits of using the software with the cash flow required to invest in it. They also result in faster time-to-benefit for each of the projects that public cloud services underpin.

Affordable IT for all


Public clouds provide small and medium-sized enterprises (SMEs) with technologies that they previously could not afford to use. In the IaaS space, for example, these include analytical frameworks based on massively parallel processing technologies, such as Amazon's Elastic MapReduce service. This redistribution of IT capabilities will radically impact supply chains.

Public clouds turn capex into opex


Public clouds widen the case for opex
The subscription and PAYG approaches of IaaS, PaaS, and SaaS turn IT asset costs from upfront capital expenditures (capex) into ongoing operating expenditures (opex). The move from capex to opex is not limited to cloud computing: for example, open source vendors are also keen supporters of the subscription model. We expect that at least one-third of IT assets will become opex within the next year, partly fueled by the current difficult, albeit improving, global economic situation: when times are tough, cash is king.

A shift should be handled carefully


Be wary of the opinion peddled by some public cloud providers that capex is passé and opex the new black, a panacea for IT budgets. Neither accounting strategy is inherently superior. The choice should be made by the chief financial officer (CFO) strictly on financial considerations, such as estimated return on investment and opportunity costs. CFOs need to bear in mind a variety of criteria, including:
- The overall position of the company: it makes sense for a cash-strapped start-up to favor opex, but this is less the case for cash-rich companies such as (some) software vendors.
- The specific vertical sector in which it operates: for example, in the public sector it is much easier to get finance for capex than for opex.
- Tax-related tactics: on-premise hardware and software capex can be turned into depreciation write-offs for tax purposes, whereas opex cannot be depreciated.
The capex-to-opex transition may also be culturally difficult for organizations if they are unwilling to move from a familiar, controllable capex model to an opex one, in which expenditures are partly paid for by a variety of difficult-to-control means such as credit cards.
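As a crude illustration of the strictly financial comparison a CFO might make, the sketch below discounts a hypothetical three-year opex subscription against an equivalent upfront capex purchase. The prices and discount rate are invented, and a real comparison would also have to factor in the tax treatment and opportunity costs noted above.

def npv(cashflows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

DISCOUNT_RATE = 0.08  # hypothetical cost of capital

# Capex: pay 90 upfront, plus 10/year maintenance for three years.
capex_flows = [-90, -10, -10, -10]

# Opex: subscription of 40/year for three years, nothing upfront.
opex_flows = [-40, -40, -40]

print(f"capex NPV: {npv(capex_flows, DISCOUNT_RATE):.1f}")
print(f"opex  NPV: {npv(opex_flows, DISCOUNT_RATE):.1f}")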

Public clouds' cost-attractiveness is not that straightforward


It depends on the type of public cloud offering
The way public clouds drive down development, deployment, and operational costs depends on the type of public cloud service being used. The higher up the IaaS-PaaS-SaaS stack the service sits, the greater the number of cost-saving opportunities. IaaS only provides basic infrastructure services: companies need to pay for the rest, and for the development and maintenance of their applications. PaaS makes it easier to develop applications, but enterprises still need to invest in the resources to create and maintain them. SaaS delivers ready-to-use application functionality, but still requires configuration and customization and, like IaaS and PaaS, integration effort.

It depends on the market dynamics behind the public cloud offering


The price level and attractiveness of public cloud offerings stem not just from the nature of these offerings, but also from the market dynamics behind them.

- In the IaaS space, public cloud providers compete with hosting providers that are used to thin margins. As a result, while public cloud offerings start cheaply, they only prove cheaper for specific types of usage, for example the handling of peak workloads and ad hoc processing of data such as video encoding. In most cases the hosting offering, although not as flexible, proves a cheaper alternative in the long run.
- In the SaaS space, public cloud providers compete with on-premise application software vendors that are used to high margins. It is, therefore, easier for SaaS providers to pitch their wares below the cost level of their competitors.

Public cloud service costs quickly add up


The cost competitiveness of public cloud services depends not only on the type of service and the industry dynamics that underpin their market segment; it also depends on whether the targeted site is brownfield or greenfield. In brownfield sites, where public cloud services try to elbow out an already established on-premise offering, public cloud service providers compete against annual maintenance fees that are significantly lower than upfront license costs. This makes it more difficult to displace incumbents (a problem also faced by open-source software). Competition for greenfield sites is much more open. In this space, public cloud services:
- reduce start-up costs but not long-term costs
- incur not just usage costs but also migration costs (such as recoding, re-architecting, integration, new processes, and training), as well as the costs required to reach satisfactory quality-of-service (QoS) levels.
It is relatively easy to calculate long-term costs. Deciphering migration costs is more difficult, as it requires knowledge of internal resources, interdependencies, and complexities that many do not have. QoS levels will not necessarily be available, even when customers are prepared to pay for them. Because of long-term costs and, more importantly, migration costs, many enterprises remain wary of public cloud services. This skepticism is supported by past experience: they have been repeatedly disappointed by the return on investment and total cost of ownership claims of on-premise software vendors. Overall, however, and especially in the SaaS market (the most established and cost-competitive of the three public cloud segments), far fewer companies report exceeding their budgets than did with on-premise software, if only because the lower upfront fees make it easier to cut investments when things do not go according to plan.

The transition to public clouds needs to be managed from a cost perspective


Procurement specialists need to adapt and update their skills
Deciding whether public cloud services are worth it depends on a variety of issues, such as the type of public cloud (IaaS, PaaS, or SaaS) and the type of project (mission-critical or not). It also requires a holistic approach to cost calculations, for example taking into account the data transfer costs that quickly make projects such as online data warehouses costly under an IaaS PAYG scheme. Procurement specialists need to adapt to these new requirements. Overall, they are more used to fixed, front-loaded cost structures than to subscriptions and PAYG schemes. Without a good grasp of spending, for example, the latter can easily turn from an attractive pricing model into an expensive mess. The problem is exacerbated by the fact that many people, from both vendors and user organizations, focus not only on the ease of provision of cloud computing services but also on the ability of cloud service provision to bypass traditional IT structures. This encourages the creation of a shadow IT infrastructure. Procurement specialists also need to learn the tricks of the public cloud trade. For example, IaaS storage services are sold according to the number of gigabytes used. Vendors use a variety of technologies (compression, de-duplication) to reduce the customer footprint, but base their prices on unreduced footprints rather than on the space the data actually occupies.

Enterprises need to set up procurement systems, processes, and metrics that support not just procurement experts but also the average user. Anybody can procure cloud services easily, but this democratization of procurement comes with its share of dangers. Even companies with tight controls on business credit card purchasing have had nasty surprises on receipt of their monthly bills. These come as a result of employees using the business credit card to buy public cloud services but failing to understand what they are getting into, or to use and manage the services they have bought. For example, it is common for developers to provision themselves with an Amazon virtual machine, use it for a while, and then forget about it. More importantly, they also forget that, while they are not using it, Amazon happily keeps charging for it.
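A forgotten virtual machine is a useful worked example of how PAYG charges quietly accumulate. The hourly rate below is hypothetical, chosen only to show the shape of the arithmetic.

HOURLY_RATE = 0.10   # hypothetical on-demand price per instance-hour
HOURS_PER_MONTH = 24 * 30

# One developer's forgotten instance, left running for six months.
idle_months = 6
wasted = HOURLY_RATE * HOURS_PER_MONTH * idle_months
print(f"cost of one idle instance over {idle_months} months: {wasted:.2f}")
# -> 432.00 at these assumed rates; multiply by dozens of developers
#    and the nasty surprise on the monthly bill becomes clear.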

Technology use needs to adapt to public cloud specificity


The way companies use technology also needs to adapt to the specific cost requirements of public clouds. For example, virtualization may be used for both public and private clouds, but its management, from a cost perspective, should be different in the two environments. In a public cloud environment, the focus is on cost control: virtual machines need to be taken down as soon as they are no longer needed to avoid paying unnecessarily for them. In a private cloud environment, the focus is on availability, whereby virtual machines can sit idle, waiting for the next workload to be allocated. Many companies have yet to set up their virtualization scheduling policies (underpinned by a properly set up governance process) to reflect these different requirements.

Architects and developers need to understand the cost implications of their actions
Architects and developers need to understand the impact of IaaS and PaaS pricing structures on the software design choices they make. For example, a pricing structure that combines cheap compute resources with not-so-cheap storage resources should be met with applications that are processing-intensive but not overly demanding from a storage point of view (or that at least compress data). It should not be the other way around, either, as that would go a long way towards neutralizing the cost savings that public clouds are supposed to deliver. Unfortunately, most enterprises are unaware of this issue. Even if they were, most would be unable to address it, as doing so would require the re-engineering of their application lifecycle management (ALM) processes (if they have any). Similarly, developers can easily spend too much time fine-tuning their Amazon IaaS virtual machines instead of being more productive elsewhere.
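To show how design choices interact with a pricing structure, the sketch below costs two hypothetical designs of the same application against an invented IaaS rate card in which compute is cheap relative to storage: one design recomputes intermediate results on demand, the other stores them. All figures are assumptions for illustration, not any provider's actual prices.

# Hypothetical IaaS rate card: cheap compute, comparatively dear storage.
PRICE_CPU_HOUR = 0.05
PRICE_GB_MONTH = 0.30

def monthly_cost(cpu_hours, storage_gb):
    return cpu_hours * PRICE_CPU_HOUR + storage_gb * PRICE_GB_MONTH

# Design A recomputes derived data on demand: more CPU, little storage.
design_a = monthly_cost(cpu_hours=2000, storage_gb=50)

# Design B materializes and stores everything: less CPU, lots of storage.
design_b = monthly_cost(cpu_hours=400, storage_gb=800)

print(f"compute-heavy design: {design_a:.2f}/month")  # 115.00
print(f"storage-heavy design: {design_b:.2f}/month")  # 260.00

Under a rate card with the opposite bias, the ranking would reverse, which is precisely why pricing structures belong in architectural reviews.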

Difficulties will fuel a backlash


Disillusionment is the usual result of hype. However, while hype is mostly fueled by vendors, disillusionment has as much to do with user inadequacies as it does with the misalignment and miscommunication of expectations and vendors' inability to deliver on their promises. Expect unchecked proliferation and inadequate management of public cloud computing services to fuel a backlash against public clouds' claimed cost-effectiveness in 2010.

Practical considerations trump economics when it comes to public clouds


Cost savings are not the most attractive characteristics of public clouds
In difficult times cash is king, even though opex may cost more in the long run than capex. Indeed, companies are not turning to public cloud services because these are necessarily cheaper than alternatives or the solutions they currently use. Instead, many are choosing this option because public cloud services enable them to do what they would not have been in a position to do otherwise, and to do it faster and with fewer risks (because of the low upfront costs).

Speed and ease of procurement are more attractive than lower costs
Public cloud services are provisioned faster and more easily than their on-premise alternatives. PaaS helps to drive development times down. SaaS does the same for application implementation. These characteristics mean faster time to benefit or to market, or both, which helps customers outflank competitors.

Lower risks and flexibility are more attractive than lower costs
Public clouds reduce not only the cost and pain, but also the risk, of using IT assets. This is more the case for IaaS and PaaS, with their PAYG approach to pricing and licensing, than for SaaS, which favors the less flexible subscription approach. IaaS and PaaS vendors have a usage-based/metered approach with no long-term commitment or contract and the ability to scale usage up or down on demand. They do not penalize users if their level of usage changes, which allows users to do away with capacity planning. (However, this will require internal controls to ensure that users do not go overboard.) This makes it easier not just to use IaaS and PaaS (as well as some SaaS) services, but to:
- Meet unpredictable demand: the ability to do this cost-effectively and with an acceptable QoS is, according to some, the key value that public clouds bring to the IT table.
- Innovate: supporting innovation is not only one of the two main challenges facing IT today (besides cutting costs), but also the more important of the two. Many enterprises therefore consider this flexibility to be more important than the cost savings enabled by public clouds.

Focus is more attractive than long-term costs


PAYG and subscription approaches may be more expensive in the long term but, in the meantime, enterprises can use the upfront and operational savings to focus on projects that can make a difference at the business or IT level, or both. At the IT level, they avoid being distracted by the operational activities and non-differentiating tasks that have been taken over (often more efficiently) by public cloud service providers. Along with focus, a move to public clouds may also be part of an effort to de-clutter existing architectures, an effort that also includes private clouds and outsourcing options in areas where companies do not have the knowledge, resources, or inclination to improve, redefine, or transform their existing IT assets.

New technology and efficiency are more attractive than lower costs
In October 2009, the Los Angeles City Council awarded Google a $7.25 million, five-year contract to put the council's 30,000 employees on a SaaS email offering. It would have cost the council significantly less to stay on Novell's on-premise GroupWise system. However, the council wanted to switch to a new technology and gain the efficiencies that come with it in areas such as procurement, flexibility, and cost.

3.3 Public cloud costings evolve, but not always as expected or for the better
Evolution will not result in complete commoditization
Commoditization is the name of the game, within limits
Many believe that, driven by their underlying economies of scale, public cloud services are engaged in a race to the bottom that will depress prices and profitability across the IT industry. They are partly right, but will eventually be disappointed if they expect the impact of the trend towards commoditization to be both broad and deep. (Commoditization has always been there; cloud computing is only one of its newest faces.) Instead, it will be neither, for a variety of reasons including the different cost profiles of the various public cloud segments and a lack of interoperability between public cloud services. On the other hand, some expect that as the public cloud market consolidates around a smaller number of players and remains riddled with interoperability issues, prices will eventually increase. We do not think so: there will not be any large-scale consolidation in the short to medium term, and interoperability is likely to improve, as it is key to the ecosystems underpinned by cloud services.

IaaS will remain the most commoditized


The IaaS market is about three years old. It will remain a low-margin business based on commodity hardware and software. On the other hand, many vendors will raise prices along with QoS in areas such as security, scalability, reliability, and availability. There will be a wide continuum of offerings, ranging from those created under a 'pile them high and sell them cheap' approach to those that propose a more enterprise-level experience. At the end of the day, customers will simply get the QoS they pay for.

PaaS is less likely to be commoditized than IaaS


The PaaS market is as young as the IaaS one. In this market, vendors will boost prices based not just on QoS, but also on the breadth, depth, and ease of use of their development and runtime services. As a result, the PaaS market will not experience the same level of commoditization-driven price war as the IaaS market. This is why it will, in the long run, become more attractive to software vendors, while services and telecoms companies familiar with thin profit margins will find it less difficult to cope with the harsh realities of the IaaS market. Amazon has had such an impact on the market that other IaaS vendors, and even PaaS vendors, are defining their pricing in relation to Amazon's. The Amazon effect is likely to shrink as the PaaS market matures and users become familiar with PaaS services. On the other hand, powerful newcomers such as Microsoft, with its Azure PaaS offering, are so keen to gain market share that they will start by prioritizing growth over profitability, and thus keep pricing down.

SaaS competition, rather than commoditization, will keep prices down


The SaaS market is more than ten years old. Its success reflects enterprises' desire to assert the commodity status of some types of business application and their rejection of extensive functionality in favor of good-enough features. At the same time, open-source software has had the same effect in the infrastructure software market. From a pricing point of view, SaaS is much more open than the IaaS and PaaS markets, as it features a wider range of niche segments, and SaaS vendors are increasingly moving their focus from low cost to added value. As a result, quite a few enterprises have started to complain that SaaS products are not as cheap as vendors claim, especially in the long term. A good example of a vendor sustaining price levels via added value is Salesforce.com. The company has not only kept adding functionality (such as better sales lead management), but has also introduced a PaaS infrastructure (for customization) and generic application capabilities (such as content management and collaboration). These are backed by a carefully nurtured ecosystem of partners using Salesforce.com's PaaS or marketplace offerings, or both. However, SaaS price rises are kept in check by:
- The increasing competition between SaaS vendors, as well as incumbent on-premise-software-centric independent software vendors (ISVs) such as Microsoft entering the SaaS market.
- The fact that most SaaS vendors, like IaaS and PaaS vendors, prioritize market share growth over profitability, which is low (in the lower single digits) compared to on-premise software vendors. This lack of profitability has led some vendors, such as Lawson, to dismiss SaaS as a fad. Other, more serious incumbent software vendors, such as SAP and Microsoft, are moving to SaaS with the stated objective of making their SaaS offerings as profitable as their on-premise offerings. Some (such as Microsoft) are happy to forego profitability to gain market share initially. Others (such as SAP) are more cautious and would prefer profitability parity between their online and on-premise offerings from the start. That may prove difficult under the twin competition of less cautious incumbents and a new generation of SaaS vendors even more aggressive on pricing than the first (based, among other things, on the cost savings delivered by the IaaS and PaaS infrastructures that they rely on). This new SaaS generation, even less addicted to high margins than the first, will force first-generation SaaS vendors to compare their pricing not only to on-premise offerings, but also to other SaaS offerings. This will make it difficult, but not impossible, for software incumbents moving to SaaS and first-generation SaaS vendors to remain disciplined in their pricing. The lack of market influence of this second generation will limit its impact, however, at least for the time being.

Overall, vendors will need to carefully balance the drive for market share with the desire for profitability. Thus, such efforts are unlikely to result in price wars in the SaaS market that are as deep as those in the IaaS and PaaS markets. Many vendors will increasingly benefit from the cost savings delivered by IaaS and PaaS infrastructures but, in order to boost profitability, will not pass all of these savings along to customers.

From upfront license to subscription to PAYG, and back


SaaS is part of the software markets move to subscription licensing
In the past ten years, the software market has shifted from large upfront license payments followed by annual maintenance fees towards more flexible subscription licensing. On-premise software vendors have (tentatively) started to make this shift for a variety of reasons, including escaping the cyclical, erratic revenue pattern generated by perpetual license models, as well as positioning themselves to compete with open-source software and SaaS.

IaaS and PaaS move one step further to PAYG


IaaS and PaaS are moving the software market one step further: from subscription to PAYG. Their usage-based approach to pricing does away with long-term commitments and provides the ability to scale usage down or up on demand. It does not penalize users if their level of usage changes, so the user does not need to worry about capacity planning. Some see PAYG as one of the core characteristics of public clouds (and, partly on that basis, declare SaaS out of public cloud bounds since SaaS vendors have so far stuck with subscription licensing and avoided PAYG schemes). This approach is different from what some on-premise software vendors mean by the term PAYG, which involves enabling companies, after they have bought and installed a software product, to unlock additional functionality for an extra fee.

SaaS will take one step towards PAYG


Part of the SaaS market is already using a PAYG approach: the vendors that use Amazon's IaaS platform (such as IBM and Oracle) to deliver their software as AMIs (Amazon Machine Images), complete with new pricing schemes adapted to Amazon's PAYG approach. In the next five years, SaaS will evolve further towards PAYG pricing as second-generation SaaS vendors use PAYG flexibility to improve their appeal at the lower end of the market. First-generation incumbents will stick to the subscription model, if only because they do not have the metering and billing capability in place to support a shift to a PAYG approach. The move will be gradual. Instead of expanding from annual subscription contracts to PAYG, many companies could add new flexibility via, for example, quarterly usage reviews. Venture capitalists (VCs) will be key to this evolution. Currently, VC executives are not so keen on PAYG, as it is riskier for the vendor than a subscription approach (which itself is riskier than an upfront licensing approach). VC executives' attitudes will change, however, as:
- PAYG-based business models start to make serious money (after all, Amazon's IaaS offering grew from nothing to more than $300 million in three years, mostly in beta form).
- Start-ups shift the upfront investment risk to IaaS and PaaS providers by building their SaaS offerings on top of IaaS and PaaS offerings.

IaaS and PaaS will take one step backward to subscriptions


While SaaS will move towards the PAYG approach, IaaS and PaaS will move in the other direction. Some enterprises, especially large ones, prefer the more predictable subscription-based approach to pricing, which they tie to volume discounts. Microsoft's Azure PaaS platform, for example, is available under PAYG, subscription, and volume licensing contracts.

SaaS will shift from price to licensing flexibility


In March 2010, RightNow announced a new standard subscription agreement that enables customers to:
- Fix pricing for three years, with the ability to renew for three more years at a capped percentage increase (up to 15%) over the original price, provided the renewal is at the same capacity level.
- Reallocate their annual subscription fee (for example, fewer users and more sessions), decrease capacity (which may increase the per-unit net price up to an agreed point), or terminate their subscription once a year, with 90 days' advance notice. Customers can add capacity at any time.
- Use annual capacity pools of usage (transactions or seats) over a 12-month period, with monthly rollover of usage to accommodate seasonality and fluctuations without having to pay extra. This could prove particularly attractive in verticals such as retail that have peak periods (Christmas, for example).
We expect more first-generation SaaS vendors to follow in RightNow's path towards more flexible licensing terms that develop risk-sharing with customers. We also expect a second generation of SaaS vendors to be even more aggressive than the first from a flexibility and risk-sharing point of view.

Free-to-use offerings will remain limited


Unlike the IaaS and PaaS markets, the part of the SaaS market targeted at consumers is free, subsidized by advertising. Some basic IaaS and PaaS offerings are free (Google, for example, has adopted a free-to-start approach to PaaS), but these are few and far between. The same applies to SaaS for the enterprise (although RightNow offers unlimited capacity for 90-day pilots so that customers can try before they buy). IaaS and PaaS will remain pay-to-play offerings, at least when it comes to running applications. However, some public cloud providers will boost their competitive advantage by offering a free-to-start approach to (IaaS) virtual machines or (PaaS) application creation and fine-tuning, or both, so users do not start to pay until they are up and running. Should this approach expand, it would help public cloud adoption. In parallel, the free consumer components of SaaS will increasingly seek revenues via additional paid-for services and better ways to deliver advertising. For example, they could move advertising from a push to a pull model, based on identity systems that allow customers to express their tastes and preferences.

Towards greater price transparency within limits


More detailed, easier-to-find pricing
Price transparency is part of the ease of procurement of a PAYG approach: in order to make resources available on demand, service vendors need to provide easy access to pricing information. As a result, IaaS and PaaS pricing and licensing schemes are much easier to find, and provide more granular information, than those of SaaS or on-premise software vendors.
- SaaS vendors: Salesforce.com is more the exception than the rule in this instance. Most SaaS vendors are not very open about their pricing, and customers are often put off not just by the lack of upfront information, but also by the variety of extras to be paid for on top of a basic license (for integration, for example). However, SaaS vendors are likely to open up and simplify as they move towards a PAYG approach to pricing.
- On-premise software vendors: there is nothing to prevent on-premise software vendors from disclosing more information. (The rise of public cloud offerings may nudge them in this direction via, for example, the use of online calculators to help customers figure out the price of a particular configuration.) If a vendor chooses not to disclose information, it is more likely to be an attempt to boost its bargaining power in negotiations with customers than for competitive reasons.

All-in-one pricing
Public cloud pricing is more transparent than on-premise software pricing because it includes updates and upgrades as well as technical support. In a perpetual upfront licensing scheme, updates and upgrades are excluded from the upfront price (although they may be included if it is a term license). Updates, upgrades, and technical support are priced either independently or bundled together as a maintenance fee and paid on an annual basis as a percentage of the upfront price (usually the nominal rather than the discounted price). Because of lower upfront license revenues and industry consolidation, in the past ten years vendors have:
- increased upgrade and maintenance fees, as well as charges to transfer ownership in the case of mergers and acquisitions
- started to limit customers to small updates rather than version upgrades, and to offer upgrades for a higher fee; for example, the 2005 Microsoft Open Value three-year license payment program included new version rights as well as support and training for 27% of the upfront price
- limited the technical support included as part of the maintenance fee.
These are some of the reasons why enterprise customers have started to turn to public cloud services (among other tactics, such as negotiating maintenance fees down, turning to third-party maintenance providers, or doing away with maintenance altogether). Some public cloud vendors are keen to build on this by adding more value to their bundles, such as free training (SaaS vendor Intacct, for example).
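A small worked example clarifies how maintenance fees reshape on-premise costs over time. The license price and fee percentage below are hypothetical (the 27% Open Value figure above covered a specific bundle of rights); they are chosen only to show why recurring fees eventually dwarf the upfront price.

LICENSE_PRICE = 100_000      # hypothetical upfront perpetual license
MAINTENANCE_RATE = 0.22      # hypothetical annual maintenance percentage
YEARS = 5

maintenance_total = LICENSE_PRICE * MAINTENANCE_RATE * YEARS
print(f"upfront license: {LICENSE_PRICE}")
print(f"maintenance over {YEARS} years: {maintenance_total:.0f}")
# At 22% a year, five years of maintenance exceeds the license price itself.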

Less discount leeway


Although discounts are available, the discrepancy between actual and official prices is much narrower for public cloud services than for upfront software licenses, if only because the upfront amount paid is much smaller under subscription or PAYG schemes. In contrast, understanding the details of the discount policy that applies to upfront software licenses is difficult. The vendor is in a much better bargaining position than the customer, in that it has an overall view of its customer base. Discounts depend on a variety of criteria, such as the size, negotiating skills, and reputation of the prospect, the opportunity for further up- or cross-selling, and whether it is a greenfield site or a deal that elbows out a key competitor.
These criteria also apply in the case of subscription deals, but are less relevant for PAYG schemes. One criterion that does not apply is the need for sales representatives to achieve end-of-quarter quotas, since neither subscription nor PAYG schemes result in the end-of-quarter peaks to which upfront licensing fee schemes lead. In addition, prices tend not to fluctuate as much from contract to contract. (Upfront license discounts are usually limited to the initial contract, so customers pay full price for any subsequent or additional deal.)

Increasingly complex
Customers have always complained that upfront licensing schemes are overly complex, being based on a variety of units, including:
• named, concurrent, and power users (or seats)
• server or central processing unit (CPU) or, in the case of an appliance license, hardware boxes, boards, blades, and so on
• component or component stack/suite, site, and so on
• all-you-can-eat enterprise volume licenses.
These schemes are complicated further by technological developments such as the increasing distribution, componentization, and virtualization of software applications, as well as the rise of multicore and grid/cluster computing. Vendors say that this complexity reflects customers' wide-ranging requirements.


Public cloud services only partially do away with this complexity. Subscription approaches to pricing and licensing, which are mostly limited to units of users, seats, or sites, are less complex than upfront perpetual licenses. PAYG approaches started out simply, but are becoming increasingly complex. PAYG pricing may be available for all to see, compare, and contrast, but users have to compare granular structures (such as the price for a CPU per hour, data transfer, or storage space), which can differ significantly. In the process, they need to decipher what lurks behind the various terminologies adopted.
In addition, in the IaaS space (with the PaaS and then SaaS markets likely to follow), public cloud service providers are expanding from PAYG pricing schemes in a variety of directions. Amazon Elastic Compute Cloud (EC2) is a good example. It started with a straightforward on-demand PAYG scheme and then, in 2009, added:
• Reserved pricing (March), which enables customers to reserve compute resources for important applications for one- or three-year terms. This tiered commitment pricing scheme offers discounts of 30-50%.
• Spot pricing (November), which allows customers to bid for unused on-demand or reserved compute resources and keep using them until Amazon needs them back or the customer is outbid. Customers pay the spot price rather than the maximum bid price that they specify. This scheme is relevant to workloads that can cope with interruptions or reduced resources, such as batch image conversion, video rendering, and financial analysis.
Other pricing schemes include:
• a configuration approach, where the customer starts with a basic configuration for an entry fee and then pays extra to add various services
• an upfront or joining fee to build and configure the system, followed by yearly, quarterly, or monthly recurring payments
• a schedule-based approach, whereby enterprises commit to a fixed number of units based on expected requirements
• buy-back, which enables public cloud providers to buy back some of the capacity of a virtual private cloud if it is unused; this approach is similar to the smart-grid/metering work in the utilities space
• vendor load balancing, in which the public cloud provider adds capacity based on resources (for example, if the CPU is more than 80% busy for five minutes, the vendor adds 30% more capacity) or virtual machines (the vendor adds virtual machines to run any extra workload).
As a result, while pricing is transparent, cost-benchmarking exercises quickly become challenging (but not impossible).
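To see how quickly these schemes complicate a simple comparison, consider the choice between on-demand and reserved capacity. The sketch below uses entirely hypothetical tariffs (not Amazon's or any other provider's actual prices) to find the utilization level at which a one-year reservation starts to beat plain PAYG.

# Hypothetical tariffs for illustration only; not any provider's actual prices.
on_demand_per_hour = 0.10      # pay-as-you-go rate, $/hour
reserved_upfront = 300.00      # one-year reservation fee, $
reserved_per_hour = 0.04       # discounted hourly rate once reserved, $/hour
hours_in_year = 365 * 24

def annual_cost(utilization):
    """Annual cost of one instance at a given utilization (0.0 to 1.0)."""
    hours = hours_in_year * utilization
    on_demand = on_demand_per_hour * hours
    reserved = reserved_upfront + reserved_per_hour * hours
    return on_demand, reserved

for utilization in (0.2, 0.4, 0.6, 0.8, 1.0):
    on_demand, reserved = annual_cost(utilization)
    cheaper = "reserved" if reserved < on_demand else "on-demand"
    print(f"{utilization:.0%} utilization: on-demand ${on_demand:,.2f}, "
          f"reserved ${reserved:,.2f} ({cheaper} is cheaper)")

With these made-up rates the break-even sits at roughly 57% utilization; the point is not the number itself but that every extra pricing dimension (term length, interruption risk, data transfer) adds another variable to the benchmark.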

From pay-as-you-go to pay-as-you-grow


Whatever their approach, in order to compete with on-premise software vendors it is not enough for public cloud service providers simply to tout lower costs. They should also consider moving pricing away from internal vendor criteria (namely internal costs plus margin) and towards pricing based on customer benefits, and away from IT-specific units and towards more business-related ones. Combining a business model based on lower profit margins than those of on-premise vendors with a pricing strategy that focuses on business issues could provide public cloud service vendors with a key advantage over on-premise vendors.
Some will focus on transaction-based models, such as pay per lead in CRM, per pay slip in payroll, or per insurance claim processed. This approach has been in place for many years in global transaction platforms in the finance industry, as well as in computer reservation systems and global distribution systems (CRS/GDS) in the travel industry. Users of these precursors to public cloud services only pay if they generate revenues from the system, for example if they carry out a financial transaction or receive a booking. Some SaaS vendors, such as Taleo, have already moved in that direction. Not all user companies will choose to go down this route, however, as it involves an amount of planning effort that not everyone is willing or able to carry out, and generates a level of uncertainty that not everyone is prepared to face.


A smaller group will focus on success-based models: for example, payment based on achieving certain productivity, employment, and/or revenue growth objectives. Most public cloud service providers shy away from this approach, as it is much more dependent on the ability of the user company than on that of the underlying software used by the firm. On the other hand, the ability to demonstrate an understanding of the needs of a specific customer or industry sector from a pricing and licensing point of view could yield benefits and strengthen the customer relationship based on a partnership approach to pricing. It would also make it easier to build a business case in terms that business executives, as opposed to IT specialists, would understand.

Public cloud price diversification will be limited by IT systems


Vendors are still feeling their way
Public cloud providers are still wondering which pricing approach is best not just to survive, but to succeed. They will continue to try a variety of strategies that will limit the impact of cost transparency by introducing new complexity. This diversification reflects a maturation process that is shared with other industries. Telephony services, for example, can be priced based on usage rates, pre-pay models, or a flat rate. Diversification enables customers to pick and choose the scheme that suits them best, both overall and on a project-by-project basis. Customers will take advantage of this diversity, although they will also pressure vendors into standardizing part of their pricing schemes.

Focusing on operational efficiencies to bring costs down


In order to stave off competition, boost profitability, and remain flexible as well as inventive in their pricing structures, public cloud vendors need to invest in IT systems that enable them to squeeze as much efficiency as possible out of their hardware and software infrastructure. To a lesser extent, they are also driving down marketing and sales costs via online marketplaces. Squeezing out operational efficiencies is currently their number-one strategy; it is focused on driving costs down and, to a lesser extent, profitability up. This focus on operating costs is important, but it is not enough for success.

Better pricing management systems


Pricing management is more important than cost management. According to a 2003 McKinsey report titled The power of pricing, the impact of a 1% price increase on operating profits is 50% greater than that of a 1% decrease in variable costs, and more than 300% greater than that of a 1% increase in sales. Public cloud vendors need to start focusing on this area much more. As competition intensifies, they need to develop an objective, structured, consistent, and flexible approach to pricing based on the use of pricing management and optimization systems (a mix of business application, business intelligence, and middleware components).
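As a purely illustrative check on that arithmetic, the sketch below applies the three 1% changes to a hypothetical cost structure (the figures are invented, not McKinsey's) and shows why the price lever dominates.

# Illustrative only: a made-up cost structure, not McKinsey's actual figures.
revenue = 100.0          # baseline revenue
variable_costs = 70.0    # costs that scale with volume
fixed_costs = 20.0       # costs that do not scale with volume
baseline_profit = revenue - variable_costs - fixed_costs   # 10.0

def operating_profit(price_mult=1.0, var_cost_mult=1.0, volume_mult=1.0):
    # Price changes scale revenue only; volume changes scale revenue and variable costs.
    return (revenue * price_mult * volume_mult
            - variable_costs * var_cost_mult * volume_mult
            - fixed_costs)

scenarios = [
    ("1% price increase", operating_profit(price_mult=1.01)),
    ("1% variable-cost cut", operating_profit(var_cost_mult=0.99)),
    ("1% volume increase", operating_profit(volume_mult=1.01)),
]
for label, profit in scenarios:
    uplift = (profit - baseline_profit) / baseline_profit * 100
    print(f"{label}: operating profit +{uplift:.1f}%")
# With this cost structure: price +10%, cost cut +7%, volume +3%, which is
# broadly the relationship the McKinsey report describes.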

Improved usage metering and billing systems


Public cloud vendors need to back up pricing flexibility with usage metering and billing systems that can cope with their requirements. As their market evolves and competition becomes more intense, the ability to create new tariffs, packaging, or service bundles will become a key differentiator. Customers are already starting to move from one provider to another, partly because of more flexible billing facilities. The problem is that, as exemplified by the telecoms and utilities markets, public cloud providers will soon experience increasing pain as operational systems and processes struggle to support the business.
Help is at hand, however. Billing is a well-trodden, mature market, with old-timers such as SPL WorldGroup (now owned by Oracle) and Convergys (including through the acquisition of Geneva Technologies). Newcomers such as SaaS providers Zuora and Aria Systems are redefining the category, just as Salesforce.com reinvigorated and redefined the CRM market.
Many providers have created their own custom systems, however, which could prove to be a mistake. Public cloud requirements, both functional and non-functional, are complex and costly to build for anything other than the most naive of use cases. Those who begin to build software with only these trivial use cases in mind will soon find their sales and marketing colleagues developing more complex scenarios, and the development projects will be stretched to breaking point.
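At its core, the rating step of such a billing system turns raw metered events into customer invoices. The sketch below is a deliberately minimal illustration of that step (all customer names and tariffs are invented); the argument above is precisely that production-grade systems must also handle proration, tiers, bundles, credits, currencies, and revenue splits across partners.

from collections import defaultdict

# Hypothetical tariff: price per unit for each metered resource.
TARIFF = {"cpu_hours": 0.10, "gb_stored": 0.15, "gb_transferred": 0.12}

# Metered usage events as they might arrive from the platform: (customer, resource, quantity).
events = [
    ("acme", "cpu_hours", 730),
    ("acme", "gb_stored", 50),
    ("acme", "gb_transferred", 12),
    ("globex", "cpu_hours", 40),
    ("globex", "gb_transferred", 3),
]

def rate(events, tariff):
    """Aggregate raw usage events per customer and price them against the tariff."""
    usage = defaultdict(lambda: defaultdict(float))
    for customer, resource, quantity in events:
        usage[customer][resource] += quantity
    return {customer: sum(tariff[resource] * quantity
                          for resource, quantity in lines.items())
            for customer, lines in usage.items()}

for customer, total in rate(events, TARIFF).items():
    print(f"{customer}: ${total:.2f}")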


In the IaaS and PaaS markets, public cloud service vendors' usage metering and billing systems need to be able to cope with not only their own requirements, but also those of the third-party developers and SaaS vendors that use their platforms. IaaS and PaaS service providers are particularly keen to make developers' and SaaS vendors' lives as easy as possible by offering them, among other things, the ability to sell and bill applications via an online marketplace. These capabilities need to become more flexible. For example, Amazon cannot currently support multiple vendors per virtual instance because it would be difficult to split revenues from each virtual instance between Amazon and a variety of vendors.
Third-party developers and SaaS vendors using the usage metering and billing systems of IaaS and PaaS service providers are also facing a growing challenge. The more diversified the portfolio of public cloud services they use, the more complicated it will be to integrate their own usage metering and billing systems with those of these public cloud services. Currently, few are alert to this challenge. There will be tears along the way for those that do not start to design their systems to cope with heterogeneity soon.

3.4 Private clouds put public cloud costs in context


Private cloud: a cost-centric notion
Boosting data center economics
Most public clouds put many internal data centers to shame when it comes to cost (and QoS). This is not just because of economies of scale but also, and more importantly, because many data centers are underused and badly managed. Vendors are marketing private clouds as the way to make data centers cost-effective. Many users have embraced the notion to do just that. IT departments are coming under increasing pressure to compete with the prices and flexibility of public cloud offerings, as public cloud services are increasingly used alongside private cloud services.

Putting public cloud cost-effectiveness in context


The cost-effectiveness of public clouds depends on a variety of elements, such as the type and size of the application and the skill of the user. In many situations, public clouds are not as attractive an alternative to internal IT infrastructures as their providers claim, all the more so if data centers become more cost-effective.

Private clouds: the larger, the more cost-attractive


McKinsey is right: pull your data center together first
An April 2009 McKinsey report entitled Clearing the air on cloud computing took a detailed look at the cost of Amazon's IaaS offering and concluded that it was not cost-effective for large enterprises. It rightfully warned CIOs not to be distracted by public cloud cost gains and to keep focusing on private clouds. Overall, the larger the data center, the less attractive public clouds are cost-wise. It makes more sense to boost the cost-effectiveness of currently owned data centers (via virtualization, automation, or service-oriented architecture) than to seek to drive costs down by turning to one or more public clouds. The potential improvement in cost-effectiveness in most data centers is such that, by getting their data center acts together, enterprises could achieve cost gains that outweigh those available via public clouds. The problem is that before a company can resource the construction of a private cloud, it has to escape from the logic of "90% of the budget goes on maintaining the legacy".

Put your efforts in context


What applies to a large company in general does not necessarily apply to:
• Some of its specific projects: depending on the circumstances, public cloud offerings present attractive alternatives to private clouds.


• Smaller companies: the McKinsey report acknowledged that IaaS may not be the best option for large enterprises, but that it can be cost-effective for SMEs as well as start-ups. (However, it still depends on the types of application used as well as the breadth and depth of IT assets and expertise.) Indeed, VCs have begun to require start-ups to use IaaS and PaaS public clouds in order to take advantage of their flexible pay-as-you-succeed-not-if-you-fail approach to pricing and licensing.
• The whole spectrum of public cloud services: the McKinsey report focused on IaaS. The other components of the public cloud stack (PaaS and SaaS) can be more cost-effective because they deliver more services or require fewer resources and effort. McKinsey itself acknowledges the cost-effectiveness of SaaS over on-premise solutions.
McKinsey did a thorough job. However, its approach has shortcomings:
• An all-or-nothing perspective: the cloud will not be either public or private, but a mix of the two. Companies are unlikely to give up on their existing data center investments because of public clouds, and they should not shy away from public clouds because of these investments. The key issue is to understand, for each company, how and to what extent public and private clouds complement one another (and other options such as traditional hosting and outsourcing), rather than pitting one against the other.
• A focus on hard costs rather than soft costs: some data centers cannot keep up with internal demand, meet requirements on time and on budget, deliver a satisfactory QoS, or retain key skilled workers. These shortcomings have a cost. From that perspective, public clouds are more attractive than they seem from a private cloud angle. In addition, the more dysfunctional the data center, the more likely both developers and end users are to turn to public cloud offerings (creating more dysfunction along the way).

Cost management is getting more complicated


On-premise software vendors fight public clouds with new financing and pricing options
Comparing public cloud service options from a cost point of view is not as straightforward as many believe. Comparisons between public cloud options and more traditional IT consumption models are even more complex, all the more so since the vendors behind these models are endeavoring to provide customers with the same flexibility as public clouds offer. They do so via new financing, pricing, and licensing options. The financing options make it relatively easy, for example, to turn an upfront capex investment into opex via a leasing financial structure. Examples of vendors that offer new pricing and licensing options include:
• Sword Ciboodle: this provider of customer interaction and contact center solutions has not only begun to look at the SaaS model (in South Africa, rather than its main markets), but has also changed its pricing structure to adapt to the success of SaaS solutions in its core market. It now provides customers with the option to rent on-premise software or to pay with a usage-, value-, or transaction-based model.
• SAP: towards the end of 2009, the company started to offer those that spend more than about $2 million annually the choice of paying for a subscription quarterly or monthly instead of upfront.
• Microsoft: the company will offer its forthcoming online version of the Office 2010 suite on a PAYG basis (alongside more traditional licensing schemes).
• IBM: like many other ISVs offering their wares as Amazon virtual machine images, the company has adapted to Amazon's PAYG approach with its new Processor Value Unit pricing model.
Under pressure from IaaS and PaaS offerings, the hosting and IT service industry is also moving towards PAYG-like approaches.


More needs to be accomplished, and vendors will find it hard to cope


While offering new financing options is attractive for software vendors because it has a limited impact on their businesses (if they can afford it), the transition from an upfront licensing model to a recurring revenue model is more difficult. Many software companies (unless they already draw a big portion of their revenues from recurring maintenance fees) find it hard to cope with the short-term revenue decrease that results from this transition. They have no choice, however, as public cloud providers fight back with lower, more flexible options, such as Amazon's reserved and spot pricing schemes. This is not the first time that on-premise vendors have shown flexibility, as exemplified by CA's now-defunct FlexSelect scheme, which was launched in 2000 based on monthly payments and business metrics agreed by vendor and users. The rise of public as well as private clouds will make them more willing to compromise.
In the meantime, however, customers grow frustrated with the inability of many licensing schemes to allow for the transfer of IT assets from internal data centers and private clouds to public clouds. (In the same way, many could not at first cope with moving IT assets from a physical to a virtualized infrastructure, a problem that has yet to be resolved and that is replicated in the IaaS space, since most IaaS offerings use virtualization.)

Hybrid clouds make the situation even more complicated


Enterprises increasingly mix public and private cloud components and traditional IT service offerings to create solutions that fit their specific short- and long-term requirements. They mix and match:
• Private and shared private clouds, to collaborate with partners on common goals.
• Public and private clouds, applying public clouds to workloads that have unpredictable spikes in use or applications that are only occasionally used.
• Public clouds and traditional hosting and outsourcing service offerings; for example, hosted offerings are usually cheaper for static websites than the Amazon IaaS service. On the other hand, for uses such as application testing, in which a small number of servers are required for a few weeks, a few hours per day, Amazon IaaS is the better solution.
• Public cloud offerings (IaaS, PaaS, and SaaS), based on their respective cost-effectiveness.
In order to mix and match a variety of IT components and services to ensure cost-effectiveness, enterprises need to improve their knowledge of how much a given asset costs, and how best to approach this cost.

Enterprise users need better systems to manage both public and private clouds
In the same way that public cloud service providers need to improve their IT systems to boost pricing, metering, and billing management, enterprise users need to put in place an infrastructure that enables them to:
• understand and manage internal costs (as well as QoS), based on a mix of tools such as IT resource, asset, service planning and portfolio management solutions, chargeback tools, and metering and billing systems
• deal with public cloud billing and metering systems: the more they mix and match different public clouds, the more diverse the public cloud metering and billing systems with which they will have to cope.
Better systems would enable users to track a variety of measures (such as the price of a gigabyte of storage), whether the service is a private or a public cloud. They would enable CIOs to demonstrate the cost of the services they provide and to put public cloud costs in context, in response to CEOs' increasing demands.
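A simple illustration of the kind of measure such systems should track: the cost per gigabyte per month of each storage service, whether it sits on an internal array, in a private cloud, or with a public cloud provider. The figures below are entirely hypothetical; in practice they would come from chargeback tools and from each provider's billing feed.

# Hypothetical monthly figures for illustration; real numbers come from internal
# chargeback tools and from each provider's billing feed.
storage_services = [
    # (service, monthly cost in $, gigabytes stored)
    ("internal SAN tier", 12000.0, 40000),
    ("private cloud storage", 7000.0, 30000),
    ("public cloud storage A", 4500.0, 25000),
    ("public cloud storage B", 3800.0, 18000),
]

# Normalize everything to a common unit so that options can be compared side by side.
for name, monthly_cost, gigabytes in sorted(storage_services,
                                            key=lambda s: s[1] / s[2]):
    print(f"{name:<24} ${monthly_cost / gigabytes:.3f} per GB per month")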


An asset lifecycle approach is key


Enterprises need to manage IT asset lifecycles to understand when and where they should make new infrastructure and application investments, using the most appropriate financing options. As part of a proactive IT asset lifecycle management effort, they need to consider:
• Moving some of their IT assets from capex to opex, which means identifying which IT assets are the best candidates for sale and leaseback or for transfer to public clouds.
• Planning for the end-of-life phase of all assets, taking into account economic and physical as well as environmental issues. With IT assets often used beyond their true economic life, operating costs, including the cost of asset disposal, can easily outweigh acquisition costs. At this point, public cloud offerings can be considered as an alternative to on-premise assets.
• Using project financing to bring forward time-to-profit and improve benefit alignment (thus reducing the attractiveness of public cloud services in this domain). They can offset finance costs against the extra discounting that vendors are willing to offer in tough economic conditions.

The objective of automated cloud auctions is far away


Many believe that they will one day be able to send intelligent agents scurrying across internal and external networks to find the best fit, cost-wise, for particular applications. These agents could be set up to organize auctions whereby private, shared, and public clouds, as well as any hybrid (not to mention traditional hosting and outsourcing service providers), would contend for the job of either delivering the entire application (the SaaS way) or providing some (IaaS) or all (PaaS) of its infrastructure. This is unlikely to happen anytime soon. For example, although Amazon introduced Spot Instances at the end of 2009, these are based on a price that Amazon, not the market, specifies. They have little to do with real auctions, which would mean Amazon giving up control over its pricing to market forces, something we do not expect vendors to be willing to do. In the meantime, a new generation of public cloud brokers is emerging. It is expanding from gateway (access) services to brokerage services, including volume licensing for customers.
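Today's spot schemes are far simpler than a true multi-provider auction: a workload runs while the published spot price stays at or below the customer's maximum bid, and is interrupted otherwise. The sketch below illustrates that logic with invented prices.

# Illustrative only: the hourly spot prices and the bid are invented.
spot_prices = [0.031, 0.029, 0.042, 0.055, 0.038, 0.027]   # $/hour, as published each hour
max_bid = 0.040                                            # customer's maximum bid, $/hour

total_cost = 0.0
hours_run = 0
for hour, price in enumerate(spot_prices):
    if price <= max_bid:
        total_cost += price      # the customer pays the spot price, not the bid price
        hours_run += 1
        print(f"hour {hour}: running at spot price ${price:.3f}")
    else:
        print(f"hour {hour}: interrupted, spot ${price:.3f} exceeds bid ${max_bid:.3f}")

print(f"ran {hours_run} of {len(spot_prices)} hours for ${total_cost:.3f}")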

3.5 Recommendations
Recommendations for enterprises
Analyze your figures
In theory, public clouds are cheaper than internal offerings and other IT service solutions. However, in practice, depending on the project, this is not always true. Do not assume anything, and analyze your figures carefully.

Opex is not a panacea


The PAYG and subscription approaches of the three types of public cloud (IaaS, PaaS, and SaaS) turn IT asset costs from upfront capex into ongoing opex. Many prefer opex as they wish to conserve cash, a basic requirement in a difficult economic situation. However, opex is not a panacea, and capex-versus-opex investment decisions need to be made based on a variety of criteria that have little to do with cloud computing. (For example, in the public sector capex is more easily obtainable than opex.)

Train specialists and adapt systems, processes, and metrics


Adapt your approach to procurement to remain in control while benefiting from the instant provisioning capability of public clouds. Train architects and developers to understand the cost implication of their public cloud actions. Establish cost management systems, metrics, and processes to understand both internal and external cloud computing costs.


Recommendations for vendors


Get ready: train specialists and adapt systems, processes, and metrics
As part of their efforts to cut costs and see off competition, cloud service providers should not only boost operational efficiencies but also develop highly flexible pricing management and usage metering and billing systems. The move to a subscription and/or PAYG licensing scheme requires a thorough redefinition of pricing, packaging, and sales incentive plans as well as an upgrade of the systems that underpin them all.

Mix and match pricing and licensing approaches


Depending on company size and project type, customers may prefer PAYG flexibility, subscription predictability, or both. Public cloud service providers should therefore mix these two approaches and consider moving beyond them towards pricing based on business outcomes (such as pay per lead in CRM and per pay slip in payroll). Combining a business model based on lower profit margins than those of on-premise vendors with a pricing strategy that focuses on business issues could provide public cloud service vendors with a key advantage over on-premise suppliers. A good first step for traditional software vendors is to adapt licensing and pricing schemes to make it easy for customers to transfer on-premise software assets to the cloud or to substitute online software for on-premise software.

Compare your organization to its peers rather than on-premise alternatives


Public cloud service providers like to compare themselves to on-premise alternatives. However, they will increasingly have to position themselves against one another. This plays into the need for greater transparency in pricing models and offerings.

Use free-to-start offerings to boost adoption


Some public cloud providers will define part of their competitive advantage via free-to-start approaches to (IaaS) virtual machines or (PaaS) application creation and fine-tuning, or both, so that users only start to pay when they are up and running. Wider use of this approach would boost market adoption, especially at the low end of the enterprise market.



CHAPTER 4: Cloud computing quality of service in perspective


4.1 Summary
Catalyst
Public cloud service providers will rise and fall on their ability to execute and deliver satisfactory quality of service (QoS) in areas such as reliability, availability, scalability, and security (RASS). Many enterprise users are wary of public clouds' QoS and RASS limitations, but curious about the possibility of adopting the technologies, designs, and best practices of public clouds for their own data centers (rebadged as private clouds). The situation is evolving rapidly for both public and private clouds, as vendors and users struggle to keep up with new developments.

Ovum view
Public clouds' QoS is under close scrutiny
Public cloud providers claim superiority over on-premise IT infrastructures on two fronts: cost and QoS. On-premise IT supporters counter-attack on both fronts, but the brunt of their offensive focuses on QoS, with a mix of valid criticisms and hyped assertions aimed at generating fear, uncertainty, and doubt (FUD). Public cloud providers should expect the criticism (and FUD) to continue at its current intensity.

Public cloud SLAs need to improve


The market is slow to trust that public cloud service providers will deliver on their RASS promises, all the more so since the service-level agreements (SLAs) that back these promises are skewed in favor of the providers. Market trust will build up, however, as SLAs, backed by certification schemes, improve. Those with QoS requirement levels for which public clouds cannot cater will keep to private clouds (including shared or virtual private clouds, or both).

Security is the number-one QoS issue


Security concerns are the most important obstacle to public cloud adoption, along with concerns related to regulatory compliance and data governance. However, public cloud risk can be managed like any other risk. Doing so requires vendors, users, auditors, and governments to cooperate, a process that has started but will prove slow-moving.

Private clouds will find it hard to keep up with the public cloud Joneses
Enterprises' QoS expectations are rising, and the rise affects both private and public clouds. The more demanding enterprises become with public cloud SLAs and QoS at all (RASS) levels, the more likely the same enterprises will be to make the same demands of their IT departments. Considering the state of many internal data centers, public cloud providers may find it easier to meet these demands than IT departments.

Private clouds will converge with public ones


In order to deliver satisfactory QoS and SLAs, public cloud providers have made technology and design choices that enterprises may not be able or willing to make. Both sides are on a convergence path, however. On one hand, enterprises will take a step forward and, for example, rethink how best to design new applications for scalability. On the other hand, public clouds will take two steps back. Many have chosen designs and technologies that have yet to become mainstream. These will evolve towards approaches that are more familiar to developers, or will mask the exotic elements of their approaches, especially when it comes to platform-as-a-service (PaaS) offerings.


Key messages
• SLAs are key to cloud adoption.
• Security is the number-one cloud QoS concern.
• Reliability and availability are under increasing scrutiny.
• Scalability underpins cloud computing's elasticity.
• The road to reliable and scalable private clouds requires new thinking and skills.

4.2 SLAs are key to cloud adoption


Enterprises are adapting to public clouds' QoS
QoS is a priority for both vendors and users
Enterprises moving to cloud services want to ensure QoS levels regardless of whether the service is delivered from a private (internal) or public (external) cloud. In response, public cloud providers position their offerings based on the quality (and relevance) of their service as much as on their price competitiveness. They claim that public cloud services' QoS is better than that of many, if not most, internal data centers (which is probably true), and use QoS as the cornerstone of their enterprise-ready image.

Users attitudes are evolving rapidly, within limits


Many enterprises are still unconvinced. For example, in early 2010 the first Risk/Reward Barometer survey of the Information Systems Audit and Control Association (ISACA), based on responses from 1,809 US IT professionals, found that 45% thought the risks of cloud computing (a phrase used as a synonym for public clouds) outweighed the benefits (such as lower cost and provisioning flexibility). On the other hand, 38% of respondents believed that the risks and benefits of cloud computing were appropriately balanced, and 17% agreed that the benefits achieved with cloud computing outweighed the risks. These figures are impressive, considering the immaturity of the cloud computing phenomenon. Far fewer enterprises had such positive views on cloud computing only a year ago. The US is less conservative than the rest of the world when it comes to public cloud adoption, however. It will take time for overall attitudes to evolve.

Users put public cloud use in context


Horror stories hyped up by the IT press, the conservatism of IT departments, and knee-jerk reactions from business executives make it difficult to have a balanced debate around public cloud QoS issues. Some enterprises are overly ready to trust public clouds, but others are paranoid. Many do not know exactly what questions to ask. Nevertheless, an increasing number of companies are working out how to adapt their use of public cloud resources to the QoS level that public cloud service providers are willing or able to provide. A whole industry is being built to enable them to do so, with workshops, guidelines, and other IT services offered by a variety of vendors. For those still unsure about the risks, insurance providers have started to develop cyber liability offerings specific to public clouds. However, both providers and users will make many mistakes along the way, ensuring a steady flow of horror stories and knee-jerk reactions.


This context will increasingly be that of supply chains, not just individual suppliers
Enterprises are increasingly aware that SLAs between service providers are as important as those between users and providers: a whole public cloud service ecosystem is being created as, for example, software-as-a-service (SaaS) vendors increasingly rely on infrastructure-as-a-service (IaaS) and PaaS offerings. The growing interdependency of service providers makes SLA guarantees and enforcement more complicated, especially when many providers do not want to face up to this complication. For example, when Tibco released its Tibco Silver PaaS offering on top of Amazon EC2, it downplayed SLA issues, focusing on its own offering instead of taking a more holistic view that would have included Amazon.

Users will have to improve the way they manage SLAs


SLAs underpin and define the provider/consumer relationship at the heart of the cloud computing notion. These relationships need to be formalized at all levels: between business and IT, between IT teams, and between IT and public cloud service providers. (They also need business units, departments, or both, to agree on how to prioritize the dynamic allocation of private and public cloud resources.) As a result, enterprises need to manage an increasing number of SLAs and of discrepancies between these SLAs: not just public cloud discrepancies (at both the supplier and the supply chain level), but also private cloud ones. To do so, they need to set up new:
• Systems: to monitor and enforce SLAs across public as well as private clouds. For example, such systems would prevent data from being transferred to the wrong location.
• Processes: procurement processes need to be redesigned to cater for the specific requirements of SLAs in the cloud. These processes need to target the specificities of the main types of public cloud (IaaS, PaaS, and SaaS). SaaS, for example, is sometimes procured by business users who lack the knowledge to ask the right SLA-related technical questions.

Public cloud providers need to manage the gap between QoS hype and SLA reality
The gap fuels skepticism
Many enterprises are still skeptical about public cloud providers' QoS promises because of the gap between these promises and the SLAs that these providers offer, if they offer any. The terms and conditions of these SLAs limit their scope and compensation:
• Scope: most SLAs only guarantee uptime, not performance, for example. Scheduled maintenance is not considered downtime and, in some cases, neither are outages of less than a specified duration (e.g. ten minutes).
• Compensation: many limit refunds to credits against future charges. The credits are either pro-rated as time lost against uptime promised, or limited to small percentages of monthly fees (a worked example follows below). In many cases redress, if any, is reactive rather than proactive: customers have to ask for it.
Many SLAs are open-ended contracts. Providers can withdraw their services at will or make changes to their offerings without end users having any say.
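The sketch below shows how little such compensation can amount to in practice. The SLA terms and figures are hypothetical, but the shape (a credit pro-rated against time lost, then capped as a percentage of the monthly fee) mirrors the clauses described above.

# Hypothetical SLA terms for illustration; actual terms vary by provider.
monthly_fee = 1000.00        # $
promised_uptime = 0.999      # 99.9% uptime commitment
credit_cap = 0.10            # credits limited to 10% of the monthly fee
outage_minutes = 180         # unplanned downtime during the month
minutes_in_month = 30 * 24 * 60

achieved_uptime = 1 - outage_minutes / minutes_in_month
allowed_downtime = (1 - promised_uptime) * minutes_in_month   # about 43 minutes
excess_downtime = max(0.0, outage_minutes - allowed_downtime)

# Credit pro-rated as time lost against uptime promised, then capped.
credit = min(monthly_fee * excess_downtime / minutes_in_month,
             monthly_fee * credit_cap)

print(f"achieved uptime {achieved_uptime:.3%}, "
      f"credit ${credit:.2f} against a ${monthly_fee:.2f} fee "
      f"(cap ${monthly_fee * credit_cap:.2f})")

Three hours of downtime in a month comfortably breaches a 99.9% commitment, yet under these terms it earns a credit of only a few dollars against a $1,000 fee.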

The gap will take time to narrow


Most of the criticism targeted at public clouds can also apply to other outsourced service offerings. Managed services, for example, have similar scope and compensation limits. Their one-to-many business model, like that of public cloud providers, reduces SLAs to the lowest common denominator. However, even in the tailor-made, one-to-one world of outsourcing, it took about 20 years for contracts to reach an acceptable balance between providers' and users' interests. It could take as much time for the same balance to be achieved in the public cloud sector.


The ten-year-old SaaS market is the most mature of all three public cloud service markets, but its SLA record is still mixed at best. The IaaS and PaaS markets have yet to catch up with SaaS. Major IaaS and PaaS players such as Amazon and Google have a low SLA starting point, owing to their:
• consumer-sector roots
• willingness to keep their offerings in beta form for a relatively long time
• "pile them high, sell them cheap" approach to the market.

The gap has already started to narrow


Amazon and Google, among others, have already started to improve their SLA terms under the influence of:
• An increasing focus on the enterprise market.
• New start-ups that want to challenge incumbent public cloud providers.
• Incumbent vendors of software, IT services, telecoms, etc. that seek to become public cloud providers based on their record as enterprise vendors. Their focus on QoS enables them to avoid competing on price.
• Implementation partners such as systems integrators that have a track record when it comes to understanding the QoS requirements of enterprise customers.
• Industry bodies such as the newly created, UK-based Cloud Industry Forum, which has put together a draft public cloud provider code of practice that promotes transparency and accountability. Having been available for public consultation from 26 April to 31 May 2010, the code of practice will be published in autumn 2010.
• Customer pressure: the larger the enterprise, the more expertise and influence it has to investigate and challenge SLAs. Overall, enterprises' expectations and negotiating skills will rise with usage and result in more user-centric terms and conditions. Some organizations are already getting results. For example, when the Los Angeles City Council decided at the end of October 2009 to award Google a $7.25 million, five-year contract to put its 30,000 employees on a SaaS email offering, the organization added to the contract a provision for compensation in case of a data security breach.
SLAs will improve as competition increases. Public cloud vendors will become increasingly:
• Transparent when dealing with QoS/RASS breaches: the best will move from reactive to proactive, and then to predictive. Transparency will strengthen customer relationships rather than undermine them, as the more transparent the vendor, the easier it is for customers to take QoS breaches in stride. Enterprises understand that problems are inevitable; what they want is for public cloud providers to take responsibility for and learn from their mistakes.
• Clear and detailed in the way they define SLA metrics: metrics will broaden to include more QoS areas, such as support and system response time, not just uptime.
• Competitive when it comes to compensation (within limits, as overly generous public cloud providers would quickly find themselves out of business): refunds will remain unlikely, but credits will increase.
Currently, their SLAs only target their data centers. End-to-end SLAs that cover both network and data centers will take time to emerge.

Public cloud SLAs will standardize and diversify


SLAs will standardize
There is currently no accepted best practice for how to approach each of the main QoS elements (RASS) from an SLA point of view. We expect SLA terms and conditions to standardize across public clouds as the market matures and consolidates, but it will take time. As part of the process, SLAs will increasingly:
• Rely on certification schemes such as that promoted by the Cloud Industry Forum. Certification will enable public cloud service providers to simplify their SLAs and back their SLA promises with accreditation.
• Converge with the SLAs of private clouds as well as those of traditional IT and telco services.


• Become more business-related. Vendors will express SLAs in business terms, not just IT-related terms, and define them from an end-user point of view, not just an IT standpoint.

SLAs will diversify


Public cloud service providers will not only standardize and simplify the way they define and express SLAs, but also fine-tune them to the needs of various market segments. As they seek, along with their customers, to find the right balance between lower cost and higher QoS, they will adapt SLAs based on a variety of criteria such as customer profile (at the low, consumer end of the market, lower costs will continue to trump QoS) and usage (for example, whether an application is mission-critical). At the end of the day, customers will simply receive the QoS they have paid for. There will always be a gap between the way providers and users relate cost to SLA level, however. The development of a wider continuum of offerings, ranging from those created under a "pile them high and sell them cheap" approach with low SLAs to more expensive, enterprise-level, QoS-backed services, will require service providers to improve their SLA management and monitoring facilities.

SLAs are central to the notion of private and hybrid clouds


QoS concerns fuel the rise of private clouds
The past 18 months have seen a significant shift in focus away from public clouds and towards a new concept: that of private clouds. It stems from a powerful mix of vendor push (to sell the latest hardware and software and respond to users' QoS concerns) and user pull (from business executives wary of public clouds' QoS credentials and IT executives keen to remain in control of IT and in possession of their jobs).

Private cloud SLAs will have to keep up with the public cloud Joneses
Business users are increasingly comparing internal QoS levels with the levels of service delivered by public cloud providers. As a result, the goal of private clouds is to turn internal data centers into more secure, scalable, reliable, and available shared dynamic pools of hardware and software assets backed by stronger, more flexible and more explicit SLAs than those currently offered (if any). Turning a data center into a private cloud will take time and will reuse many of the RASS technologies and best practices of public clouds.

Enterprises mitigate public/private cloud risks with hybrid mix


An increasing number of companies are learning how to adapt their use of public and private cloud resources to the QoS level that both external service providers and internal IT are willing and able to provide. In the process, they mix both private and public cloud elements to find the right balance between risks and benefits, leading to the rise of:
• shared private clouds, whereby several companies share the same cloud services, which are usually provided by a third party
• virtual private clouds, whereby public cloud providers designate part of their public clouds for use by a single company.

4.3 Security is the number-one cloud QoS concern


Trust in public clouds is growing, but security concerns fuel interest in private and hybrid clouds
Enterprises find it difficult to grapple with public cloud security issues
Of all QoS concerns, security is the biggest hurdle on the way to public cloud success, as illustrated by a number of surveys. Public clouds are repositories of a wealth of software and data that can easily be turned into real money. They are open to cyber attacks, but at least as many efforts are made to defend them as to loot them. Caught in the middle, users are not sure whether they can trust public clouds yet, especially since the stakes are high, not just in terms of financial losses but also in terms of damage to brand and reputation.


Users need time to familiarize themselves with cloud-specific security challenges. Some security risks are similar to those encountered internally or with more traditional IT outsourcing services. Public cloud services introduce more potential vulnerabilities as a result of sharing these services and the possibility of subcontracting parts of them, among other things.

Despite the hysteria surrounding security issues, cautious trust is growing


Security is an issue around which a lot of FUD has traditionally been created, fueled by a variety of high-profile incidents such as the attack on Google's infrastructure by Chinese hackers in late 2009, which led Google to withdraw from China. Despite the FUD (and the growing reality of cyber attacks), attitudes have evolved significantly, although enterprises are still cautious. As with any other budding IT technology, they start with small projects and build on trust and track records. To speed up the process, various efforts are being made, such as the Cloud Security Alliance, launched in March 2009, which has produced best-practice security guidelines. Public cloud providers also make their own security guidelines available.

Security concerns fuel the rise of private and hybrid clouds


Vendors have been quick to respond to enterprises' security concerns with private and hybrid cloud alternatives and complements to public clouds.
Private clouds enable enterprises to remain in control of their IT assets in order to maintain security. The enterprise has a clear view of the security regime in place, and can carry out an audit at any time. Indeed, certain types of security control, particularly at the network level, cannot be applied to public clouds. On the other hand, the rise of private clouds creates new security challenges. For example, most traditional approaches to security assume that IT assets are physical rather than virtual. This is a problem, considering that private clouds are based on the pooling of IT resources using virtualization technology.
Hybrid clouds enable enterprises to mix private and public cloud elements to remain in control of at least part of their IT assets. (For example, the data remain internal but the application moves to a public cloud, or vice versa.) Public cloud vendors are keen to meet these hybrid requirements through a variety of approaches. In the IaaS space, service providers offer virtual private network (VPN)-based virtual private extensions to their offerings. (Amazon announced its own in August 2009; see the Ovum opinion piece Amazon adds support for hybrid clouds.) In the SaaS market, service providers partner with companies that offer extensions to their offerings. For example, Salesforce.com, which does not yet have a data center in Europe, partners with PerspecSys to serve those unwilling or unable (for regulatory reasons) to transfer data to the US. The Salesforce.com edition of the PerspecSys Cloud Data Governance Solution allows companies to run Salesforce.com applications in the cloud while storing private and sensitive data at home.

Public cloud security needs more work


Public cloud providers have a holistic approach to security
Public cloud service providers counter security fears with assertions that they have secured their offerings against both internal and external threats at all levels:
• The mega data centers that underpin public cloud services: they do so via a mix of technologies (biometric authentication, mantraps, etc.) and processes (for example, to limit the damage that malicious insiders might cause).
• The software platform and applications delivered as a service: at this level, security is (or should be) built into the development process, design, and various technologies that underpin these platforms and applications. Design is key, with many public cloud providers considering security to be as much a byproduct of good architecture design (for example, network-based domain isolation) as a goal in itself. Design and technology choices have different security impacts. For example, the security requirements of a multi-tenant platform are different from those of a platform that relies on virtualization technology to deliver SaaS.


• The data and metadata (while in transit, in storage, and in use) processed and generated by public clouds: public cloud providers combine a variety of technologies, such as VPN, data leakage prevention, enterprise digital rights management, multi-location backup, and encryption.
• Public cloud access points: public clouds use service access controls based on identity and access management (IAM) technologies to ensure that only authorized users and applications gain access to the relevant functionality and data. Securing their various application programming interfaces (APIs) is also becoming a priority.
• Public cloud supply chains: to ensure that no provider in the chain weakens the secure provision of cloud services, the supply chain includes security-centric service providers such as those that provide single sign-on or identity management services.
Based on the various resources at their disposal in terms of technology and expertise, public cloud providers claim that they are at least as secure as most internal data centers. They point out that the specialization, homogeneity, automation, and centralization that public clouds offer increase security. They can remain tightly focused on their particular offering, which they can easily and rapidly update or upgrade in case of a problem, whereas enterprise IT staff have to take a more generalized approach.
Providers also rightfully assert that, while some public clouds have been used as platforms for malware, they are mostly used as platforms for a new generation of security-related services (such as identity management services, disaster recovery, antivirus, and application security testing) that secure both private and public clouds. For example, SaaS application security testing provider VeraCode achieved a sevenfold increase in revenue bookings in 2009. Most of its revenue comes from large organizations, mainly in the financial services and government sectors, that demand that a security code specialist independently examine software (for both on-premise and SaaS configurations) before they will buy it.

Public clouds security credentials will take time to mature


Overall, public cloud security needs to improve at a variety of levels:
• Security measures: for example, public clouds currently do not have particularly strong authentication and encryption capabilities. They also use technologies such as virtualization, ID federation, data privacy, and API security, which need to mature and require better standards. The growing complexity of public cloud supply chains complicates things further.
• Security information: many service providers are reluctant to supply details about the way that they secure their offerings, hanging on to the discredited myth that security is impaired if you disclose your security procedures. They should shift from assurances to accountability and be more open on a variety of issues such as internal procedures (for example, the power they give to system administrators), access to log files, and the security technologies they have selected.
• Standards and best practices: these are lacking. Public cloud providers usually mention the SAS 70 Type II and ISO 27000 standards, but these are not cloud-specific and can be implemented in a piecemeal fashion.
Public cloud service providers are slowly improving at all levels, owing to the barrage of scrutiny to which they are subjected. They have started cross-industry efforts such as the CloudAudit initiative, which aims to provide a common interface and namespace for public cloud providers to automate the Audit, Assertion, Assessment, and Assurance of their environments via an open, extensible, and secure interface and methodology. Overall, however, their efforts will take time to mature. In the meantime, enterprises need to familiarize themselves and keep up with the security measures in place.

Public cloud security begins at home


Public cloud security is an extension of internal security efforts
Public cloud user security should rely on strong internal security practices, to avoid public cloud users becoming the weakest link in the public cloud security chain. (This is currently the case, as public cloud use is often off CIOs' and internal security teams' radars.) It should be woven into the multiple layers that underpin internal security: continuous monitoring, data encryption and backup, and controlled virtual machine image creation, for example. The challenge for public cloud providers is to make sure their security measures are not only effective, but also usable: they should not require internal security overhauls.


Security software vendors are keen to enable enterprises to move towards a hybrid, cloud-centric view of security. For example, Novell asserts that, for the cloud to become an extended part of their infrastructure, enterprises should use the same security and access control technology, model, and interface for the public cloud as they use internally. An increasing number of people (including representatives of the UK government talking about the country's government clouds) point out that public clouds provide an opportunity to take a good look at internal security procedures, weave security more tightly into the fabric of IT processes, and redefine the mix of skills needed to achieve IT security. Too many enterprises do not have the governance processes and metrics in place to track the effectiveness of their security efforts.

Users' security efforts depend on the type of public cloud


The level of required security varies depending not just on the type of data or application involved (the more sensitive or mission-critical, the more security required), but also on the type of public cloud. The higher up the public cloud stack, the less responsibility customers carry for security:
• IaaS service providers are responsible for the security of the compute, storage, and network resources they provide. Customers are in charge of the rest, namely securing virtual images and their content (operating system, applications, and data). Developers need to build their virtual stacks carefully, removing any unnecessary software and opting for a closed configuration policy by default. System administrators need to keep an eye on their applications to detect any sudden changes of traffic pattern that might indicate a security breach (see the sketch after this list).
• PaaS service providers are responsible for the security of the application development tools and runtime platforms they offer. Customers are responsible for securing the applications they develop for and run on PaaS offerings. PaaS helps in that it constrains the design and development of applications in a way that may generate more secure code. It also offers built-in security features (such as federated identity services) as well as APIs to connect to third-party security services.
• SaaS service providers are responsible for securing the entire stack, from infrastructure to application and data. They rely mostly on the security capabilities of their underlying database technology, which has had time to mature security-wise. (IaaS providers, on the other hand, have to grapple with the immaturity of virtualization technology.) However, SaaS customers are responsible for IAM-related security as well as the governance of the data processed by the SaaS application.
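As an illustration of the monitoring duty noted in the IaaS item above, the sketch below flags any hour in which request volume jumps well beyond a short rolling baseline. The traffic figures and threshold are invented; a real deployment would draw on the provider's monitoring or log feeds and use more robust statistics.

# Hourly request counts for one application; the figures are invented for illustration.
requests_per_hour = [980, 1010, 995, 1025, 990, 1005, 2950, 1000]

window = 5        # hours used to form the rolling baseline
threshold = 2.0   # flag anything more than twice the baseline average

for hour in range(window, len(requests_per_hour)):
    baseline = sum(requests_per_hour[hour - window:hour]) / window
    current = requests_per_hour[hour]
    if current > threshold * baseline:
        print(f"hour {hour}: {current} requests against a baseline of {baseline:.0f}; "
              "possible breach or abuse, investigate")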

Public cloud security requires a redesign of current security approaches


Many companies have already extended the perimeter of their security defenses to cater to the security needs of mobile devices and customers, as well as of partners within their supply chains. Public cloud providers point out that public clouds allow security to be delivered more effectively across supply chains. However, public clouds do not so much extend internal security defenses as deconstruct them, moving them towards a much more decentralized security model and thus requiring different technologies. Yet the technologies and standards needed to underpin this model are either nonexistent or immature. This decentralized model relies on hybrid clouds so that, for example, encryption keys for public clouds can be stored in private clouds.

Public clouds increase the importance of data security


There is a growing consensus that public clouds provide the IT industry with an opportunity to redefine the way it approaches security by shifting its focus from the application to the data. The objective is to wrap security around data so that security measures can be applied to data regardless of their location across public and private clouds, depending on data type, sensitivity, and use, among other criteria. Public cloud providers are already working on making this vision a reality, but it will take time. In the meantime, user organizations can help by putting their own data centers in order. They need to take a hard look at the types of data they have in order to adapt their use of public clouds to the sensitivity of their data. They need to distinguish between sensitive and non-sensitive data, and the various levels in between: an ongoing exercise that few companies have gone through.


User organizations also need to take an interest in the way that data are secured across the data lifecycle (data creation, use, sharing, storage, archiving, and destruction). In the minds of many users, the whole point of outsourcing resources to public clouds is to forget about the operational details of these resources. This is a luxury that firms cannot indulge in. As societies and economies become increasingly digitalized, data governance legislation is likely to tighten rather than relax, making it impossible for enterprises (SMEs included) to wash their hands of this issue. They need to pay particular attention to the following:
Data location: enterprises need to know where their data go and what they go through. (As a result, data location transparency is one of the main issues emphasized by the Open Cloud Manifesto published in early 2009.)
Data replication technologies, processes, and compliance: user organizations need to understand how public clouds replicate data in order to secure them. They need to know what type of synchronization and recovery technologies are used; whether the locations are independent enough that a failure in one does not create problems in the others; and what the data retention and transfer policies are, to make sure these do not contravene privacy and data transfer-related legislation, and so on.
Data confidentiality: some public cloud providers give themselves the right to peek into enterprise data for a variety of reasons, such as advertising or screening out discriminatory content. Enterprise users need to be aware of these rights and curtail them if necessary (or if possible). These concerns are less related to digital identity and online privacy issues (which are key to the consumer side of public cloud services) than to the protection of intellectual property and trade secrets.
Data return policies: enterprises need to consider not just entry but also exit strategies, and how best to move their data elsewhere should they decide to terminate their use of a particular public cloud offering or should the service provider become unable to provide the service they signed up for. They need to seek contractual guarantees that all of their data will be returned promptly and in the desired format at the end of a contract, and that all backup copies will be destroyed.

Public clouds require strong identity and access management


Strong IAM (identity provisioning and de-provisioning, authentication, federated identity management, user profiles, and access control policy) needs to be implemented by both users and service providers to avoid dangers such as account or service hijacking. This is a challenge on both sides. Many public clouds, for example, use flat access controls based on weak authentication credentials (basically user ID and password). The best option is federated, standards-based IAM practices such as assertion-based access using the security assertion markup language (SAML), which transfers the burden of authentication to the service that issues the assertions. User organizations should be able to specify access permissions to a deep level of granularity so that individual users can be limited to subsets of data and service functions. Help is at hand from an increasing number of single sign-on and identity management service providers.
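The sketch below illustrates, in simplified and hypothetical form rather than through any real SAML library or vendor IAM API, what that level of granularity means in practice: permissions are expressed per role, per data subset, and per service function, with the role expected to come from a validated federated assertion rather than a local password.

from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    # Maps a role to the data subsets and service functions it may touch.
    permissions: dict = field(default_factory=dict)

    def allow(self, role, dataset, actions):
        self.permissions.setdefault(role, {})[dataset] = set(actions)

    def is_permitted(self, role, dataset, action):
        return action in self.permissions.get(role, {}).get(dataset, set())

policy = AccessPolicy()
policy.allow("claims_clerk", "customer_records", {"read"})
policy.allow("claims_manager", "customer_records", {"read", "update"})

# In a federated setup the role would be taken from a validated assertion
# issued by the enterprise identity provider, not from a local login.
assert policy.is_permitted("claims_clerk", "customer_records", "read")
assert not policy.is_permitted("claims_clerk", "customer_records", "update")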

Compliance is more of a concern than security


Security, not compliance, should come first
Security concerns are often confused with compliance concerns when it comes to clouds. Security compliance requires organizations to be secure and to be able to demonstrate it. Even where there is no need to satisfy formal external audit requirements, the need to demonstrate security is almost as pressing as the need to apply security controls in order to establish business confidence. Because of the current lack of transparency, demonstrating security is a weak point of public cloud services. Regulatory compliance requires organizations to obey a variety of laws, many of which are specific to countries or vertical sectors, or both, such as the Payment Card Industry Data Security Standard (PCI DSS) and the US Health Insurance Portability and Accountability Act (HIPAA). These add a variety of twists to generic security (compliance) issues. Here again, the current lack of transparency does not help. However, as an increasing number of providers become vertical-market-specific, they will seek compliance with vertical-market-specific legislation based on a mix of their own efforts and cooperation with customers and auditors.


A clearer distinction between the two types of compliance allows public cloud service consumers and providers to put security compliance ahead of regulatory compliance. Most organizations focus too much on the latter at the expense of the former. Instead, security should come first: once secure, IT assets can then be made compliant. Consumers also need to understand how the two issues relate to one another. For example, when Amazon acknowledged in 2009 that enterprises could not run PCI Level 1-compliant applications on its compute and storage IaaS offerings (EC2 and S3, respectively), too many people concluded that these offerings were not secure. That is not the case: the non-compliance stems from Amazon not allowing on-site audits, not from any lack of security in the offerings themselves.

A more balanced approach to risk management is required


A clearer approach to compliance, as part of a holistic view of business and IT risk management, would give enterprises a more balanced approach to public cloud risks and opportunities. This would help them understand why and to what extent public cloud opportunities are worth the risk. As ISACA pointed out when discussing its recent survey of 1,809 US IT professionals, risk management is too much about public cloud compliance and not enough about the innovation and business change that public clouds can enable.

Certification is key to cost-effective and balanced security and compliance


Public cloud providers and users are in the process of finding the right balance between the need for security and compliance and the need for low cost and innovation. The more secure and compliant the public cloud, the more expensive (and less flexible) it becomes. Costs are also driven up by the increasing right-to-audit (RTA) demands of enterprise customers. As a result, both sides see certification by independent bodies as a way of alleviating individual customers' security and compliance demands. Certification should apply to complete public cloud supply chains when the public cloud service is delivered using a mix of offerings.

Certification schemes will take time to emerge


These certification schemes are not yet in place. They will require auditors to re-evaluate compliance criteria to achieve the reasonable certainty they seek. Auditors are much less willing to do so than is claimed (or expected) by public cloud providers. While auditing practices and standards mature and adapt, many public cloud providers will have to self-audit or rely on limited cloud audit services such as those offered by the likes of HP and IBM. For example, HP Cloud Assure, launched in early 2009, tests the security, availability, and performance of cloud services. Security testing includes code scans and penetration testing. However, it mainly focuses on the external access control aspects and does not validate internal operational procedures.

The public sector is a key participant in the security and compliance debate
Governments are still grappling with the cloud computing phenomenon
When it comes to security and the Internet, governments are users, regulators, and cyber-attack protectors as well as initiators. The 21st century is one of cyber warfare, and war, in all its various guises, is a governmental affair. However, as with actual war, the current problem has less to do with nation states than with hard-to-anticipate rogue terrorist movements. As a regulator, the public sector is also a key participant in the compliance debate. However, the public sector finds it as hard to grapple with public cloud security and compliance issues as enterprise users do. In March 2009, for example, the US Federal Trade Commission, the Organisation for Economic Co-operation and Development, and Asia-Pacific Economic Cooperation held a meeting to discuss cloud computing security. The overwhelming message from the delegates was that there was no consensus yet on the best way to regulate cloud services, which are rapidly becoming globalized. Efforts are still mostly national and piecemeal, with many governments remaining uncertain of the best way to approach public clouds. As usual, the US leads the way, with centralized efforts to define a national security and compliance framework for public, private, and hybrid clouds.


Their role is particularly important when it comes to data governance


Data governance is particularly important to public cloud providers and users. This is due to a series of data-related laws, such as those that focus on:
restrictions on holding data in foreign jurisdictions (as in the US Patriot Act and EU data protection legislation)
data breach notification (which requires organizations to notify data subjects that may have been affected by a data leakage incident)
data privacy, especially with customer data increasingly being used as a source of revenue.
Public cloud providers want some of these laws (such as restrictions on holding data in foreign jurisdictions) to be loosened and others (such as data privacy laws) to be strengthened to boost public cloud adoption. For example, the US coalition Digital Due Process, which is supported by the American Civil Liberties Union, eBay, the Electronic Frontier Foundation, Google, and Microsoft, among others, would like a review of the 1986 Electronic Communications Privacy Act to boost data privacy. Users are taking their fate into their own hands. For example, when the Los Angeles City Council decided to move to Google's SaaS email offering in October 2009, it requested a contract that would prevent Google employees from accessing the content of its emails. Besides making sure that public cloud providers respect restrictions on the holding of data in foreign jurisdictions and comply with data breach notification and data privacy laws, enterprises are concerned with the following:
The release of their data to government agencies: in the US, for example, although public cloud providers do not own customer data, they can be forced to release them. (They can also be stopped from notifying the relevant customers if this happens.) While there are legal means to force enterprises to release data stored internally, enterprises are more likely to resist as long as possible, something a public cloud provider would not necessarily be willing to do. In addition, public cloud providers may not have the audit data required to satisfy legal investigations, making things more complicated for enterprises embroiled in a legal fight. As a result, even those enterprises willing to trust public cloud providers need to make sure their contracts offer a clear electronic discovery framework.
The law applicable to their data (as well as their procurement contracts): this is a standard IT issue that takes on more importance in public as well as hybrid clouds, because the clouds and their data can span multiple jurisdictions, some of which apply the rule of law with unwelcome twists.

Finding the right balance between regulation and economic development


Public cloud service providers claim that public clouds deliver IT as a utility, but are determined to avoid governments regulating them to the same extent as other utility sectors. Governments are split on the issue between two responsibilities:
As regulators, they intend to boost public cloud security and compliance with additional, more specific regulation, as well as via support for and participation in security, identity, and privacy cooperation efforts.
As national economy managers, they want to attract public cloud data centers. Some are already rethinking old rules to achieve this objective. In the Netherlands and Canada, for example, there are arguments in favor of relaxing restrictions on holding data in foreign jurisdictions if the data are encrypted.
There may be setbacks on the way, however. For example, the EU is currently reexamining the US Safe Harbor agreement and may end up tightening the rules that govern the transfer of data between the US and the EU. Similarly, in early 2010 the Indian government announced its intention to tighten data location regulation to prevent enterprises from storing tax-related data offshore for tax-evasion purposes.


4.4 Reliability and availability are under increasing scrutiny


Reliability and availability are growing public cloud concerns
Public clouds assume constant failure
Public clouds position themselves as reliable and available based on the enterprise-grade quality of their hardware and software components, the design of their architecture, the adoption of best practices, the deployment of edge-of-network infrastructure (such as content delivery networks), and so on. The design of their offering assumes that components will fail and, in order to deliver on their reliability and availability promise, providers weave into their offering a level of redundancy both within and between data centers that few private clouds and traditional hosting solutions can match.

Public clouds have a relatively good reliability record


As a result, public cloud service providers have a relatively good reliability and availability record. For example, Salesforce.com has been around for more than ten years, and its SaaS offering may have suffered from a few hiccups, but it has had no major breakdowns. Similarly, Amazon Web Services' (AWS) IaaS offering has never gone down as a whole, only in parts. AWS provides its customers with the concept of availability zones, across which they can spread their applications. Availability zones are insulated from failure in other zones (and help compliance by limiting data to a particular zone).
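A minimal sketch of spreading an application across availability zones follows, using the current AWS Python SDK (boto3). The AMI ID, instance type, and zone names are placeholders, and credentials are assumed to be configured in the environment; this is an illustration of the pattern, not a deployment recipe.

import boto3  # pip install boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one identical instance in each of two zones so that a failure
# confined to one zone does not take the whole application down.
for zone in ("us-east-1a", "us-east-1b"):
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI for illustration only
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
    )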

Public clouds can fail


However, public clouds regularly go down: sometimes indirectly (the fault is in the Internet connection, not the public cloud itself), sometimes directly. Providers have track records of 99.9%-plus uptime, but only when averaged over long periods. Significant outages happen, and there is no evidence that this will stop. Some observers argue that outages have simply been the result of growing pains and that, as the industry matures, reliability and availability will improve. Others point out that, as the number of public cloud users increases, public clouds' reliability and availability limitations will become increasingly apparent. The truth probably lies between these two perspectives. It will take time for public clouds, and the Internet itself, to mature enough to provide enterprises with the level of reliability and availability that they require for their mission-critical applications. For example, many public cloud offerings, such as Microsoft's Azure, currently only offer 99.9% uptime, significantly short of the 99.999% that enterprises require for their core mission-critical applications.
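The gap between those two figures is easy to underestimate; a few lines of arithmetic show the downtime budgets they imply over a year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in (0.999, 0.9999, 0.99999):
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime allows roughly {downtime:.0f} minutes of downtime per year")

# 99.900% uptime -> roughly 526 minutes (almost nine hours) per year
# 99.999% uptime -> roughly 5 minutes per year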

Users are increasingly concerned about reliability and availability


As public cloud usage increases and public cloud outages continue to happen, enterprises will:
Put public cloud providers' reliability and availability claims under increasing scrutiny, especially since any public cloud failure could impact a large number of enterprises. Private clouds may be just as unreliable, but their failures impact far fewer companies at a time.
Prepare for the pain inflicted by public cloud outages via temporary fallback procedures, as part of an overall public cloud risk assessment exercise. Of course, these procedures must be limited, because the expense of installing elaborate IT backup systems would defeat the purpose of public cloud services. They will vary based on how mission-critical the various services and applications are.
Agree to SLAs that not only define providers' responsibilities but also limit consumers' rights to reliability and availability assurances should their use of public cloud resources reach certain thresholds.


Public cloud providers need more transparency


As public clouds continue to go down regularly, service providers are becoming increasingly responsive and transparent in order to counter the "cloud is no good" hysteria. Many are quick to put their hands up, apologize, and explain the problem and what is being done to solve it. This proactive attitude goes a long way towards maintaining and nurturing trust, although public cloud providers could hardly have done otherwise: customer reactions are so quick and large-scale, and the press so keen on doomsday scenarios, that proactive management of the problem is the only sensible course of action. However, public cloud providers should also be more proactive in sharing reliability and availability data so that enterprises can anticipate issues rather than having to deal with them as they start using cloud services. The same applies to scalability and performance issues. The service records that suppliers currently post on their websites (such as Amazon's Service Health Dashboard, Google's App Status Dashboard, and Salesforce.com's Trust site) are useful, but too basic and too short to give a real picture of reliability. The records posted for Google Apps and Amazon Web Services cover only the previous two months and 35 days, respectively, while Salesforce.com's service history runs back less than one month (but lists response times, which is an unusual bonus). The records also fail to provide latency data, and usually do not mention outages. Customers should ask suppliers to provide detailed and comprehensive histories of their uptime over at least the previous 12 months. They should request that providers be more proactive in notifying users of any failure. On the other hand, they should also understand that vendors are coy about outage-related information not just because they want to minimize attention, but also because it is not easy to quantify outages: in many cases the recoveries are rolling, with many customers not affected for the full length of the outage.
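Until providers publish longer histories, customers can build their own. The following is a minimal sketch that assumes only that the provider exposes a publicly reachable status or health URL (the address below is hypothetical) and that simple HTTP reachability is an acceptable proxy for availability; real monitoring would probe the actual service endpoints it depends on.

import csv
import time
from datetime import datetime, timezone

import requests  # pip install requests

STATUS_URL = "https://status.example-provider.com/health"  # hypothetical endpoint
POLL_INTERVAL_SECONDS = 60

with open("uptime_history.csv", "a", newline="") as handle:
    writer = csv.writer(handle)
    while True:
        timestamp = datetime.now(timezone.utc).isoformat()
        try:
            available = requests.get(STATUS_URL, timeout=5).status_code == 200
        except requests.RequestException:
            available = False
        writer.writerow([timestamp, int(available)])  # 1 = reachable, 0 = not
        handle.flush()
        time.sleep(POLL_INTERVAL_SECONDS)

A year of such samples gives an enterprise its own 12-month uptime record, independent of the short windows providers currently publish.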

Reliability and availability are both a problem and an opportunity


Reliability and availability will take time to improve, but they will do so as public cloud providers strive to offer more expensive but more reliable and available services (in a continuum that ranges from public to hybrid as well as private clouds). Availability zones are nice earners, for example, because customers have to pay extra for inter-zone data transfers. Service providers also generate increasing revenues by helping enterprises develop contingency plans (such as multiple Internet service providers with multiple Internet connections to public clouds, and regular backup of public cloud data). Compuware, for example, is raising the profile of its Gomez subsidiary via the beta CloudSleuth website, launched in April 2010, which provides reliability and performance information about a variety of public cloud infrastructures. It is broader in its coverage than the CloudStatus monitoring service, also currently in beta.

Reliability and availability are private and hybrid cloud objectives


The data center as it stands and should be
The concept of private clouds relates to reliability from two perspectives:
The data center as it stands: many private cloud supporters argue that the average data center is more reliable than public clouds. The reverse is, in fact, more likely to be true.
The data center as it should be and could become: private cloud supporters aim to turn the traditional internal data center into a private cloud, namely shared, scalable pools of assets available on demand, to improve reliability.

A hybrid cloud issue


Increasingly, the debate about public versus private cloud reliability is moving towards hybrid clouds, as enterprises seek to combine private and public clouds to achieve satisfactory reliability overall. From an application point of view, the objective is not just to work out which applications to keep internally (the mission-critical ones) and which to entrust to public clouds (such as web applications and collaboration-centric applications). It is also to split application components and decide which to keep internal and which to move to public clouds. In addition, it is about figuring out how to design applications that work both online and offline, something that is being considered for desktop applications in the consumer market but is likely to become more prominent for enterprise applications.


4.5 Scalability underpins cloud computings elasticity


Scalability is public clouds' number-one feature
Scalability positions public clouds well against private clouds and outsourcing
When the QoS debate about public clouds tackles security, reliability, and availability issues, these are mostly defined as concerns. When the debate shifts to scalability, the conversation is dramatically altered: it is no longer about shortcomings, but benefits. Public clouds are sold on their ability to scale up (and down) dynamically, providing enterprises with the flexibility they need to innovate and take risks at a price they can afford. This flexibility is a key differentiator against traditional data centers and outsourced hosting offerings.

Public clouds have different scalability profiles


IaaS clouds provide a pay-as-you-go (PAYG) scalable infrastructure, but it is up to users to put together the right infrastructure components and design applications that can scale.
The PaaS runtime handles scalability (scaling out across servers or up across processors), so developers do not have to worry about building scalability into their application code. The PaaS PAYG approach to pricing delivers on-demand elasticity. On the other hand, PaaS constrains developers to specific technologies and application designs.
SaaS providers build scalability into their applications; however, the subscription approach to licensing adopted by most SaaS vendors limits the ways in which users can benefit from this. In return for discounts, most customers prefer annually negotiated contracts. These can be reviewed easily to add new users, but make it awkward to reduce the number of users.

Public cloud providers are open about their scalability recipe, within limits
Scalability cannot be an afterthought. It is an upfront design issue that is difficult and expensive to address. There are many ways to approach it (including the ability to use more computing, storage, or network resources and the ability to parallelize applications across multiple servers), and it can be delivered via a mix of design and technology choices, such as additional hardware, virtualization, data grids, and distributed caches. Public cloud vendors are willing to share details of the choices they have made and provide guidance on how to make the best of their offerings from a scalability point of view. SaaS providers using IaaS and PaaS clouds are also relatively open to sharing their experience (for example, SmugMug's use of Amazon Web Services). On the other hand, many public clouds reach their scalability objectives via custom software (and, to a lesser extent, hardware) assets that they are not so open about discussing. These often consist of a custom middleware layer aimed at squeezing the most out of basic, inexpensive (x86) hardware and (often open-source) software components.
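As an illustration of one of the generic techniques mentioned above (and not a description of any particular provider's custom middleware), the sketch below shows a distributed cache spreading keys across nodes with consistent hashing, a common way to scale out while keeping re-shuffling to a minimum when cache nodes are added or removed.

import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, virtual_nodes=100):
        # Each physical node is placed on the ring many times ("virtual nodes")
        # so that keys spread evenly across the cluster.
        self._ring = []
        for node in nodes:
            for replica in range(virtual_nodes):
                self._ring.append((self._hash(f"{node}#{replica}"), node))
        self._ring.sort()
        self._hashes = [point for point, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise around the ring to the first node at or after the key's hash.
        index = bisect.bisect(self._hashes, self._hash(key)) % len(self._ring)
        return self._ring[index][1]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.node_for("session:12345"))  # deterministically maps the key to one node

Adding a fourth node only remaps the keys that fall between the new node's points and their predecessors, which is what makes the approach attractive for elastic infrastructures.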

Scalability is a goal for private clouds, too


Size matters
Existing internal data centers, when they are turned into private clouds, can compete with public clouds from a security, reliability, and availability point of view. This is much less the case when it comes to scalability, as the average data center is much smaller than the mega data centers that underpin most public clouds. That is one of the main reasons that an increasing number of companies are turning to shared private clouds, which provide a good midway solution between small internal data centers and mega public cloud data centers.


Scalability via sharing


Scalability is one of the main objectives of the transition from data center to private cloud, achieved by turning existing IT assets from siloed resources into pools of standardized and consolidated resources to be shared. A good example is the blade farms that finance companies use for processing-intensive applications such as simulation (e.g. Monte Carlo risk analysis). These are being revisited (partly by reusing public cloud designs and technologies) to make their compute resources available to other applications, not just simulation ones. The objective is to achieve better economies of scale, reduce usage variation, and maximize use. In other words: the fewer the silos, the more the sharing, and the more potentially scalable the infrastructure.

Different perspectives on scalability


Public and private clouds have different perspectives on scalability design issues. For example, private clouds need generic compute pools that can be shared to maximize usage, while public clouds need specialized compute pools (focused on a specific task, such as search, selling, and bidding pools in the case of eBay), which they scale independently from one another to maximize availability.

4.6 The road to scalable and reliable private clouds requires new thinking and skills
Public clouds open up new avenues
A renewed debate around how best to deliver QoS
Part of the public-versus-private-cloud debate revolves around which type of cloud is most able to deliver satisfactory QoS at all (RASS) levels. This has led to a renewed debate on how best to achieve those QoS levels. There is:
consensus on the need to use old, trusted software-engineering principles such as abstraction, separation of concerns, loose coupling, and modularization (principles that underpin service-oriented architectures)
disagreement on which technologies and technology implementations to choose, as exemplified by the debate concerning database partitioning versus federated database design. Like similar debates, it is extremely nerdy, but it reflects a renewed, healthy focus on new ways to deliver QoS.

Public clouds challenge conventional wisdom on a variety of fronts


At the infrastructure level, public clouds are looking anew at the following:
How best to package infrastructure components: in the past ten years, for example, the application server has emerged as the cornerstone of data center infrastructure. It packages a variety of services (such as transaction processing, integration, and process enablement) into a single, simple-to-use offering. In the brave new world of highly scalable and distributed public cloud infrastructure, this packaging is being reconsidered. The variety of services that application servers provide are being scattered across the infrastructure, a move facilitated by the modularization of application servers using technologies such as OSGi.
How best to achieve transactional integrity: according to the CAP theorem defined by Inktomi's Eric Brewer in 2000, only two out of three key distributed system design objectives (namely consistency, availability, and partition tolerance) can be achieved at any one time. Many public clouds focus on availability and partition tolerance at the expense of consistency. They reject the strictures of the atomicity, consistency, isolation, durability (ACID) approach in favor of the more laid-back basically available, soft state, eventually consistent (BASE) method, among other techniques such as compensating transactions, ubiquitous caching, and distributed replication.


Which database technology is best: over the past 20 years, the relational database model emerged as the dominant jack-of-all-trades, pushing alternatives to the periphery of the market. This trend is now reversing, however, as many public clouds have rejected the model in favor of new and old alternatives. Some of these developments are independent from, but accelerated by, cloud computing (such as the atomization of application server services and the virtualization of software, storage, and input/output).
At the application level, public cloud providers are leading the move towards stateless applications that relate to the underlying compute infrastructure via asynchronous, persistently queued events. This, in turn, leads to the redesign of data center infrastructures in a way that supports the new stateless applications as well as legacy stateful applications (and ensures that stateful applications deliver true linear scalability by making them more parallelized, partitioned, and distributed). A sketch of the stateless, event-driven pattern follows.
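In the minimal sketch below, an in-process queue stands in for a durable, hosted message queue (an assumption made purely to keep the example self-contained): workers hold no session state and react only to queued events, so identical workers can be added or removed freely to scale out.

import json
import queue
import threading

event_queue = queue.Queue()  # in production this would be a persistent, hosted queue

def worker(worker_id):
    while True:
        raw_event = event_queue.get()
        if raw_event is None:            # sentinel used here only to stop the demo worker
            event_queue.task_done()
            break
        event = json.loads(raw_event)
        # All state needed to process the event travels with the event itself.
        print(f"worker {worker_id} processed order {event['order_id']}")
        event_queue.task_done()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for thread in threads:
    thread.start()

for order_id in range(5):
    event_queue.put(json.dumps({"order_id": order_id}))

event_queue.join()
for _ in threads:
    event_queue.put(None)
for thread in threads:
    thread.join()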

Public and private clouds will converge, within limits


Public cloud technology and design choices will not be fully adopted by private clouds
Adoption is a two-way street. It is as much about whether a particular IT offering is ready to meet the requirements of a particular market as it is about whether the market is ready to handle the new offering. Many expect the QoS-related technology and design choices of public clouds to move over to private clouds, but this will not happen to the extent that they predict. Many enterprises are not able or willing to do this yet. They may not have the skills and resources to cope with the magnitude of some of the changes proposed, at least not quickly.

Private clouds will take one step forward


It will take time for many IT departments to move out of their comfort zones of relatively simple client-server and N-tier systems to build more scalable shared infrastructures and the applications that run on top of them. However, many have already started to do so, often as part of service-oriented architecture (SOA) efforts. Public clouds encourage them to move further, more quickly.

Public clouds will take two steps back


While there will be some crossover from public to private clouds from a QoS technology and design choices perspective, it is likely to be smaller than the other way around. Private cloud users will get public cloud service providers to standardize on the technologies and designs they use internally, rather than adopt the exotic approaches of some public clouds (although these approaches will remain a competitive differentiator for some public cloud providers). Public clouds will evolve towards approaches that are familiar to developers, or public cloud service providers will keep masking the complexity of their approaches (especially when it comes to PaaS offerings). A good example is Microsoft, which started with an exotic database technology before stepping back, under developer pressure, to support a more mainstream database technology.

Convergence will be fueled by hybrid clouds


Irrespective of whether public clouds will influence private clouds more from a QoS perspective than vice versa, it is undeniable that these two approaches are on a convergence course. This is accelerated by the rise of hybrid clouds, which combine public and private cloud services in a variety of ways. Hybridization will help public and private clouds find common ground.


4.7 Recommendations
Recommendations for enterprises
Put your use of public clouds in context
The debate about public cloud QoS too often revolves around extreme positions. The reality of public cloud usage lies in the middle: in the careful balancing of risks, costs, and benefits. Enterprises need to carefully assess the QoS requirements of the data, applications, and processes that they plan to move to, and get from, public clouds; this is an ongoing effort that they need to weave into their IT governance efforts. A strong cloud computing governance framework would also enable them to manage their own side of the QoS bargain: responsibility for QoS, especially at the security and compliance level, is shared between providers and consumers.

Leverage public cloud SLA competition


SLAs will become a key competitive differentiator for public cloud service vendors. Make sure you take advantage of this competition to drive SLAs up (or down) to the level you require. Large enterprises will still be able to twist SLA terms and conditions to their advantage. Smaller entities will have to make do with what they are offered and will need, as always, to understand all aspects of their providers' SLAs. (For example, they will have to understand exactly what "downtime" means.)

Adopt a balanced approach to security


Do not accept public cloud providers' claims of 100% security. Challenge their assertions and ask for details. However, do not be put off by security FUD either. Be on the front foot in terms of managing stakeholder perceptions and prejudices. Overall, public clouds are neither more nor less secure than on-premise data centers. However, they do raise challenges that need to be thought through carefully. Make sure you are not the weakest link in the security chain that links you to public clouds. You may want to start with applications that will either benefit from increased sharing of information or have relatively benign security and privacy profiles.

Always have a plan B


The way to ensure success with clouds is to plan for their failure. The cloud can evaporate, but there are a variety of options to deal with this issue, including cloud-based ones. This could take the shape of data synchronization, so that you keep a live copy of your (important) data on-premise or in different clouds. It could also be delivered by application failover/migration capabilities that bring applications back in-house (for the limited number of applications that require such a drastic set-up) or move them to another cloud.
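A minimal sketch of the data-synchronization flavor of plan B is shown below, under stated assumptions: export_records() stands for whatever provider-specific export call is available (it is not a real API), and a local JSON file stands in for on-premise storage. The point is the pattern of keeping a current, restorable copy outside the cloud, not a specific implementation.

import hashlib
import json
from pathlib import Path

LOCAL_SNAPSHOT = Path("onpremise_snapshot.json")

def snapshot(records):
    """Keep an on-premise copy of cloud-held records and return its content hash."""
    payload = json.dumps(records, sort_keys=True).encode()
    LOCAL_SNAPSHOT.write_bytes(payload)
    return hashlib.sha256(payload).hexdigest()

def load_snapshot():
    """Fall back to the most recent on-premise copy if the cloud service is unreachable."""
    return json.loads(LOCAL_SNAPSHOT.read_bytes())

# Usage (export_records() is an assumed, provider-specific export call):
# digest = snapshot(export_records())
# ...later, if the service evaporates...
# records = load_snapshot()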

Adopt a holistic approach to public, private, and hybrid clouds


The move to public and private clouds, and to the variety of hybrids in between, has consequences when it comes to the technology and design choices that enterprises have to make. These choices are interconnected. Some public cloud choices will impact private clouds, and vice versa. Enterprises need to remain in control of these choices instead of leaving them to vendors and service providers or learning about the consequences of their decisions when it is too late to do anything about it. They need to develop new skills and capabilities to be ready to exploit the cloud (to develop stateless applications based on asynchronous messaging, for example).


Recommendations for vendors


Focus on the business outcome rather than technical issues
The current debate around public and private cloud QoS issues focuses on nerdy technical considerations. Public cloud service providers as well as those targeting private clouds should make more of an effort to frame the debate from a business perspective.

Improve and diversify SLAs


In addition to improving your SLAs, you should also seek the right balance between lower costs and higher QoS for each segment of the market, based on a variety of criteria such as customer and application type.

Be transparent and proactive


In order to build your reputation and track record, you need to be transparent with your successes and failures, as well as open and proactive in handling the latter. Make sure that you are regularly audited by a reliable third party.

Build your security credentials on openness


Welcome discussion of your general approach to security. Strength comes from community engagement, not just to boost security itself, but also to understand customer needs. Provide clear statements about security policies and reports, and be willing to share audit reports with clients. As part of these efforts, make clear statements about how your services meet compliance requirements, and how they can be integrated with the clients own applications in a way that is consistent with major regulations.

Help enterprises make the right technology and design choices


In order to deliver satisfactory QoS and SLAs, public cloud providers have made technology and design choices that enterprises may not be able or willing to make. They need to evolve towards approaches that are more familiar to developers, or that will mask the complexity of their approaches, especially when it comes to PaaS offerings.

Adopt lock-in risk-mitigation strategies


The provision of service portability, if not frictionless migration, is important for the success of individual cloud services and the entire IT consumption model alike. A degree of lock-in might be hard to avoid, particularly in PaaS services, but that is no excuse for failing to adopt lock-in mitigation strategies such as open-source licensing, code escrow, standardization on widely adopted frameworks, or interoperability with alternative platforms. You also need to influence the direction of related legislation.


CHAPTER 5: Cloud governance: an overview


5.1 Summary
Catalyst
Cloud computing enables enterprises to:
deliver IT resources via automated, virtualized, and service-oriented architecture (SOA)-centric private clouds
consume and combine IT resources from a variety of public cloud offerings
mix and match private and public cloud services together into hybrid clouds.
Cloud governance enables enterprises to exploit these new opportunities and tackle cloud computing in a systematic manner (instead of the piecemeal approach that currently characterizes most cloud computing-related initiatives). This systematic approach needs to be woven into current application lifecycle management (ALM) and SOA governance efforts, as part of IT governance's efforts to cross-reference and coordinate governance initiatives.

Ovum view
IT governance is necessary and difficult, and cloud computing (in the form of public, private and hybrid clouds) makes it even more so. It introduces an additional layer of complexity that enterprises need to control in order to make the most of its benefits (including lower costs and on-demand flexibility). However, cloud governance best practices and underlying tools are in their infancy, which makes it difficult to strengthen current ALM and SOA governance efforts in areas such as cloud-centric applications and application programming interface (API) management.

Key messages
Cloud governance builds on IT governance.
Cloud governance delivers the right recipe from a variety of ingredients.
Cloud governance, like IT governance, is a work in progress.
ALM governance needs to expand to the cloud.
Cloud governance builds on SOA governance.

5.2 Cloud governance builds on IT governance


An IT governance driver, among others
Similar to other disruptive trends
Cloud computing is the latest in a series of disruptive trends affecting IT departments. These include:
SOAs: the redesign of IT systems into more modular components.
Virtualization: the abstraction of IT hardware and/or software assets from one another.
Outsourcing: the provision and/or management of an increasing amount of IT assets by third-party service vendors.


Each of these trends, including cloud computing:
cannot succeed without an effective IT governance framework that promotes and ensures coordination between IT teams
promotes, and builds on, the other governance efforts.

Impacts all three IT domains


Like the other disruptive trends, cloud governance impacts all three IT domains:
IT business management (ITBM), managed by the office of the CIO: this is the IT domain responsible for overall IT governance. It is at this level that IT executives define the business case for IT initiatives and manage business expectations as well as project/application portfolios, change, quality, enterprise architecture (EA), and procurement.
ALM, managed by the application development (AD) team: this is the IT domain responsible for user requirement-driven software design, sourcing, coding, testing, build generation, and deployment, as well as versioning and configuration management. Following deployment, developers are also responsible for application maintenance, including updates and upgrades.
IT service management (ITSM), managed by the IT operations (ITO) group: this is the domain responsible for the management and support of deployed applications. Its responsibilities include application monitoring and troubleshooting as well as the helpdesk.

A shared and service-centric IT driver, among others


The more IT becomes shared and service-centric, the more governance is needed at all levels: IT, ALM, SOA, and cloud, among others.

Drives increased resource sharing


Both public and private clouds (and the various hybrids in between) aim to provide enterprises with pools of IT resources/assets for their various divisions, lines of business (LoBs), and departments to share. By cutting through infrastructure and application silos, SOA initiatives also make it easier to share infrastructure and application assets. By abstracting application and infrastructure assets from one another, virtualization facilitates this sharing further. In that context, IT governance needs to manage the transition towards increasingly shared IT assets. This requires:
an enterprise-wide viewpoint that moves away from local division/LoB/department issues to prioritize usage based on overall business requirements
good conflict management skills and the ability to make compromises, as sharing is not always easy
the alignment of cloud, SOA, and virtualization governance efforts.

Drives service-centric IT
Shared assets are increasingly delivered as a service. The evolution of IT departments from technology managers to providers of technology-based services is fueled by:
ITSM, a top-down, business-driven approach to the management of IT that specifically focuses on the need to provide satisfactory service levels
ALM, with the aim being to have developers create software that can live up to service-level agreements (SLAs)
SOA, which is both about service-centric software design (the splitting up of software into components that provide services to one another) and service-centric software delivery (whereby software components are combined and provided on demand)
cloud computing, which offers on-demand, service-centric delivery of IT (hardware and software) assets in both private and public clouds, and thus redefines the way businesses see, use, and pay for IT.
Service-centric software delivery (as well as service-centric software design) relies on a contractual approach to services that defines the context in which service providers and consumers relate to one another. The SLA is the mechanism that makes the service provider accountable to the consumer (accountability being a core governance tenet). In that context, IT governance needs to make sure that the IT department:


Successfully manages the transition to service-centric software delivery at all levels (ALM, ITSM, SOA, and cloud computing governance).
Does not mistake the transition for the main objective. Service-centric IT is not the end but the means that underpins a top-down approach to IT focused on business scenarios and outcomes, rather than a bottom-up approach focused on technology and vendor stacks. This is particularly relevant to cloud governance, where technology considerations often take precedence over business concerns.
From a cloud computing governance point of view, the objective is to make sure that IT departments do the following:
When acting as consumers (of public cloud services), they must insist on satisfactory SLA parameters and require strong SLA guarantees, besides putting the service provider through a due diligence evaluation process.
When acting as providers (of private or public cloud services), they must deliver SLA parameters and guarantees that meet consumers' requirements.

An IT governance federation member, among others


IT governance federates a variety of governance efforts
There will never be one master governance lifecycle that regulates all matters IT. Instead, IT governance cross-references and coordinates various governance efforts that overlap one another. As shown in Figure 5.2.1, these efforts include:
organizational-centric governance initiatives; for example, ALM governance carried out by the AD group and ITSM governance carried out by the ITO group (with IT governance federating the two from the point of view of a cradle-to-grave approach to ALM)
cross-organization governance programs such as SOA or cloud computing governance
IT practice-centric governance efforts such as architecture, data, and security governance (critical to cross-organization programs such as SOA or cloud computing).

Figure 5.2.1: IT governance federates a variety of governance efforts (spanning SOA, cloud computing, ALM, ITSM, security, and data governance)
Source: Ovum


The objective is to build a positive feedback loop between these various governance domains. The complexity and difficulty of doing so are such that IT governance is not the aim but the journey. There is no big-bang implementation, but an approach similar to the Japanese kaizen concept of gradual, orderly, and continuous improvement.

Cloud computing governance must be woven into all IT governances federation efforts
IT governance federates other governance efforts with a variety of perspectives. Cloud computing governance should be woven into each of these. They include the following:
Asset lifecycle management: IT governance promotes a lifecycle management approach to the various IT assets (hardware, software, information/data), from their creation or sourcing down to their disposal or re-purposing. Each governance effort has its own specific approach to lifecycle management, linked to the specificity of the assets being managed (for example, ALM and SOA apply lifecycle management to applications and software services, respectively). However, these various specific lifecycles can be correlated to one another on the basis of a generic lifecycle management approach, illustrated in Figure 5.2.2. Cloud computing governance relates to all lifecycles (hardware, data, software application, software service, virtual machine, and so on). It expands these lifecycles from internal to external IT systems. When both private and public clouds are used, it ensures that the same lifecycle deliverables are created on both sides.
Dependency management: cloud computing (along with SOA, virtualization, and open source) is one of the reasons why IT departments need to keep up with increasingly integrated, portable, abstracted, and open IT assets. The more integrated, portable, abstracted, and open these assets become, the more (dynamic rather than static) dependencies there are to manage, orchestrate, and automate. The vertical dependency management approach to IT assets also helps map each step of the horizontal asset management lifecycles to one another.

Figure 5.2.2: IT governance underpins IT assets' lifecycle management (the ALM/ITSM application lifecycle and the SOA software service lifecycle both run through inception, construction, provision, operation, and change, coordinated by IT/cloud computing governance)
Source: Ovum


EA governance: EA governance provides a system-centric backbone to federated IT governance efforts. In the case of public clouds, it stretches to architectures and IT assets managed by third parties. This requires a difficult cultural shift that will take time. EA governance is also important in efforts to turn (parts of) current data centers into private clouds.
APM/PPM integration: federated IT governance relies on the integration of project portfolio management (PPM) and application portfolio management (APM) people, processes, and tools (which may be part of the same suite as EA tools). It also integrates PPM and APM with a variety of teams, processes, and systems outside IT (such as the CFO office, investment portfolio management processes, enterprise transaction systems, and risk management systems) to link all stages of the software application/service lifecycle to their business context. When it comes to cloud computing, the challenge is for PPM and APM tools to expand to manage assets that are outside of the organization.

Cloud computing governance federation expands IT governance federation


Federation efforts, in the context of IT governance, focus on the coordination of various IT-related governance domains within the enterprise. In the context of cloud computing governance, meanwhile, they relate to coordinating inter-enterprise governance in the context of:
public cloud service consumer/provider business-to-business interactions
public cloud service provider/provider cloud-to-cloud interactions (the building of public cloud supply chains and the integration of various public clouds via an intercloud infrastructure).

Like IT governance, cloud governance needs to identify the right objective(s)


Cloud governance should focus on enabling flexibility
Public and private clouds' ability to make it faster and easier to procure or let go of (as well as develop, deploy, and/or manage) hardware and software assets will make the biggest difference. Cost and quality of service (QoS) issues are critical, but cloud computing governance should not over-emphasize them at the expense of enabling enterprises to strike the right balance between effectiveness (low costs, relevant performance) and innovation (making it easier to try something new).

Private cloud governance should focus on ease of controlled internal procurement


Private cloud governance should focus on controlling as well as taking advantage of the ease of procurement and efficiency that private clouds deliver via a self-service portal (so that it no longer takes weeks or months, but only hours or at most days, to acquire or relinquish IT assets). It should prevent IT people from diving so quickly into the technicalities of delivering fully automated, virtualized, and SOA-centric data centers that they lose sight of what it takes to become IT service delivery-centric.

Public cloud governance should focus on getting rid of shadow IT


Many enterprises have created SOA services and virtual machines without being ready to properly manage these new IT assets, and they are doing the same with public cloud services. A great proportion of these services have been adopted behind the CIO's back. They are part of a shadow IT infrastructure that is out of the control of IT departments. The main objective of public cloud governance should be to use a variety of means (carrot as well as stick) to bring these public cloud services back into the governance fold. As usual, the key problem is to strike the right balance between flexibility and control: not just to align business and IT, but also to align business users with one another, as public clouds make it easy for various LoBs to go their own way, reinvent the wheel several times over, and create a patchwork of incompatible systems.


5.3 Cloud governance relies on the same ingredients as IT governance


The Ps of governance
Figure 5.3.1: The Ps of governance (policies, plans, performance monitoring, people, processes, and the paraphernalia of systems, tools, and technologies)
Source: Ovum

As shown in Figure 5.3.1, there are six main ingredients in any good governance recipe, including cloud governance: people, processes, policies, plans, performance monitoring, and paraphernalia (various systems and tools).

People are the main ingredients of cloud governance


Cloud governance requires a center of excellence
As shown in Figure 5.3.2, governance is both applied to management (strategic corporate governance, managed by the board of directors) and part of management (tactical corporate governance, partly defined and wholly executed by the executive team). Like other cross-IT-domain governance initiatives such as SOA governance, cloud computing governance requires the creation of a center of excellence independent from, but working closely with, the enterprise's project/program management office. At a strategic level, the center of excellence draws together business as well as IT experts. At a tactical level, it relies on resources from all three IT domains (ITBM, ALM, and ITSM).


Figure 5.3.2: Strategic and tactical governance (governance appraisal by the board of directors, governance definition by the business and IT executive teams, and governance execution by business-focused and IT-focused cloud centers of excellence, across corporate, IT, and cloud governance)
Source: Ovum

Cloud governance is about relationship management


Cloud computing impacts the way enterprises relate to:
Their IT assets: it impacts the way they define, create, procure, and consume IT assets.
IT departments: by freeing IT people to focus on key projects, it enables as well as undermines (by enabling users both within and outside IT departments to bypass established processes) enterprise-wide IT strategies.
One another: among other things, it makes available to SMEs what was formerly only available to larger organizations, and makes it easier for companies to share IT assets.
Their IT vendors: cloud computing shifts the risk of investing in and managing IT to the vendors. This makes it a slow-moving phenomenon, as internal and external relationships evolve much more slowly than technology.
Cloud computing governance is about managing the evolution of these relationships for controlled, managed, and incremental cloud adoption.

Cloud governance is about empowerment, not just control


The more structured and effective the governance, the more people can get involved in it, feeding a positive feedback loop of implementation and adoption. This requires a lot of support as well as clear communications, as inertia makes it slow for cultures to evolve, for people to move from being aware of the need for governance to implementing it. Governance is not just about control, and keeping an eye on individuals to make sure that they behave as expected, but also about empowerment, based on a realignment of objectives and incentives to encourage behavioral change. The notion of empowerment (via self-service) is particularly important to cloud computing.


Public cloud governance is about managing IT department evolution


Public clouds will not kill IT departments but shift their focus:
from IT provisioning and implementation tactics towards a more logical and strategic view of IT (turning IT people from IT implementers into IT project managers)
towards a more holistic approach to the way they connect network, hardware, and software issues, hence the need for EA governance; infrastructure-as-a-service (IaaS) pioneer Amazon, for example, created its IaaS offering as part of a process to boost interactions between network engineering and AD people
from a focus on producing services/applications to a focus on mixing and matching these services/applications into the right processes
from a focus on maintenance to a focus on value and innovation, with public clouds taking responsibility for commodity IT assets and IT departments concentrating on those IT assets that define competitiveness
towards more risk-taking, because public clouds reduce the cost of doing so and free IT people to pay more attention to risky but high-reward ventures
towards control sharing: sharing IT control with public cloud service providers as well as with partners in shared (virtual) private clouds.
Public cloud governance is about managing and controlling these various transitions.

Private cloud governance is about enabling private clouds to keep up with public ones
The past 18 months have seen a significant shift in focus away from public clouds towards a new concept: that of private clouds. This stems from a powerful mix of vendor push (to sell the latest hardware and software and respond to enterprises' concerns about public clouds) and user pull (from both business executives wary of public clouds' QoS credentials and IT executives keen to remain in control of IT, and in possession of their jobs). IT executives cannot ignore public clouds, however, as internal IT cost and QoS will increasingly be compared to those of public clouds. From that point of view, private cloud governance is about:
- managing the evolution of internal IT from the mundane data center to the state-of-the-art private cloud, and making sure that private clouds can live up to the public cloud challenge in areas such as QoS as well as speed and ease of procurement
- positioning internal IT services as competitive alternatives as well as complements to public cloud services (and IT executives as competitors as well as trustworthy managers of these services).

Policies, plans, performance monitoring and processes are the backbone of cloud governance
As shown in Figure 5.3.3, governance connects high-level strategy objectives to low-level project implementations via policies, plans, and performance monitoring.


Figure 5.3.3: IT governance connects strategy with projects (mission and strategy feed policies, plans, and targets within the governance domain; project monitoring feeds adjustments back into them)

Source: Ovum

Policies define the compliance context


Policies are the backbone of any governance process. When it comes to IT governance, the main objectives are to ensure compliance as well as satisfactory cost and QoS levels. The same applies to cloud governance:
- The aim of private cloud governance is to build on internal data centers' QoS credentials and improve on their cost definition and management capabilities. Many enterprises are not able to calculate and then allocate costs, despite being willing to do so, because they do not have the infrastructure, processes, and metrics in place.
- The aim of public cloud governance is to take advantage of subscription and/or pay-as-you-go (PAYG) payment facilities to help control costs and match usage with QoS levels (with a particular focus on security and data governance policies).


Plans define the governance roadmap


While a strategy defines the why, plans define the where (to go from and to) and, for each step of the way, the what, how, and who. They are roadmaps that help enterprises manage:
- the transition to cloud computing
- each cloud computing project
- the lifecycle of each cloud service.
The first step of any plan is to assess current status in order to identify where to start and what to prioritize, and to define and coordinate tactics and objectives. This includes resource allocation, as well as change and impact management.

Performance monitoring enables governance efforts to adapt


Performance monitoring ensures plans are on target and enables those in charge of governance to analyze usage patterns and adapt to them by making adjustments to projects and strategy. It is key to the management of the SLAs that define the relationship between public as well as private cloud service providers and their consumers. This management requires the capture and correlation of performance metrics across different IT layers, which is a tough challenge for both private and public cloud service providers.

Process automation delivers consistency and enforcement


Governance requires process automation to ensure consistent policy and plan implementations. The challenge is to find the right balance between tight enforcement (policy control via automation) and light guidance (empowerment to achieve governance objectives without too much constraint on how to do so). This challenge is particularly important to public cloud usage, which mostly takes place outside of IT control. Here governance means improving controls rather than taking control. In a private cloud setting, it means creating a control framework that encourages, rather than limits, innovative experiments.
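The balance between tight enforcement and light guidance can be illustrated with a simple policy check. The sketch below is illustrative only; the rule names, thresholds, and request fields are hypothetical. One rule hard-blocks a provisioning request, while the other merely flags it for later review.

```python
from dataclasses import dataclass

@dataclass
class CloudRequest:
    requester: str
    service: str
    data_classification: str     # e.g. "public", "internal", "confidential"
    monthly_cost_estimate: float

def evaluate(request: CloudRequest) -> tuple[bool, list[str]]:
    """Return (approved, messages): blocking rules enforce policy tightly,
    advisory rules guide without constraining how objectives are met."""
    messages: list[str] = []
    approved = True

    # Tight enforcement: confidential data may not leave the private cloud.
    if request.data_classification == "confidential":
        approved = False
        messages.append("BLOCKED: confidential data must stay on the private cloud")

    # Light guidance: high spend is allowed but flagged for the governance board.
    if request.monthly_cost_estimate > 5000:
        messages.append("WARN: estimated spend exceeds 5,000/month; flagged for review")

    return approved, messages

if __name__ == "__main__":
    print(evaluate(CloudRequest("jane", "IaaS VM", "internal", 7500.0)))
```

The same structure scales from a standalone script to rules evaluated by a portal or gateway; what matters is that blocking and advisory outcomes are kept distinct.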

Cloud governance relies on paraphernalia in the form of systems, tools and technologies
Paraphernalia are the foundation for cloud governance, yet managed by it
The definition and implementation of policies, plans, and performance monitoring, as well as processes, rely on paraphernalia in the form of systems, tools, and technologies. As with processes, these paraphernalia:
- underpin governance; for example, IT policies are enforced by rule and/or process engines
- are subject to EA governance, which defines, among other things, which systems, tools, and technologies to use and how to design and mix them.
The tools are becoming:
- public cloud-enabled (e.g. Xactium's Salesforce.com-hosted GRC requirement management solution or Monitis' Amazon EC2-based SOA load testing service)
- public cloud service enablers (e.g. PerspecSys' salesforce.com Edition data governance solution, which enables Swiss banks to use Salesforce.com CRM while retaining sensitive data in-house as required by Swiss law).

The self-service portal is the core cloud governance system


When it comes to both public and private clouds, one of the critical governance-related systems is the self-service portal that provides instant on-demand access to a catalog of services and can either be on-premise or delivered on a software-as-a-service (SaaS) basis. Backed by usage monitoring and chargeback mechanisms, it is the key service provider/consumer interface that delivers product detail and pricing information, enables configuration, handles the ordering process, and manages user access on the basis of governance policies.
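As a rough illustration of how such a portal ties the catalog, governance policies, and chargeback together, the minimal sketch below models an order flow. Class names, roles, and prices are hypothetical, not a description of any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogItem:
    name: str
    monthly_price: float
    allowed_roles: set          # governance policy: who may order this item

@dataclass
class SelfServicePortal:
    catalog: dict = field(default_factory=dict)
    usage: list = field(default_factory=list)   # feeds usage monitoring and chargeback

    def publish(self, item: CatalogItem) -> None:
        self.catalog[item.name] = item

    def order(self, user: str, role: str, item_name: str) -> str:
        item = self.catalog[item_name]
        if role not in item.allowed_roles:
            raise PermissionError(f"{role} is not entitled to order {item_name}")
        self.usage.append((user, item_name, item.monthly_price))
        return f"{item_name} provisioned for {user} at {item.monthly_price}/month"

portal = SelfServicePortal()
portal.publish(CatalogItem("Standard Linux VM", 120.0, {"developer", "it_ops"}))
print(portal.order("jane", "developer", "Standard Linux VM"))
```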


5.4 Cloud governance, like IT governance, is a work in progress


Governance efforts need to improve
Too reactive, technology-centric and piecemeal
Cloud governance suffers from the same flaws that have affected other IT governance areas. Overall, IT governance has so far proven:
- Too reactive: most IT governance efforts are prompted by new regulations or by the need to keep up with uncontrolled SOA software services, virtual machines, or public cloud services (whereby governance starts when the public cloud bill is much higher than expected).
- Too technology-centric, rather than focused on business outcomes: the use of public clouds is a good example of this. Despite growing interest in IT transitioning from managing technology to providing technology as a service, neither business nor IT executives have been particularly proactive in managing the various (cultural, policy, process) changes that such a transition requires at all levels (e.g. SOA and cloud governance).
- Too piecemeal, leading to mismatched and/or overlapping governance: for example, those organizations that have undertaken SOA pilots have implemented SOA governance in parallel to (rather than as part of) ALM governance. This results in separate processes for vetting requirements, quality assurance (QA), release management, and portfolio management, which leads to inconsistency and duplication of resources. The same is happening to cloud computing.
Mismatched and/or overlapping governance efforts stem from organizational as well as technological silos, the latter often the result of the former.

Integration standards are a challenge


Technological silos stem from various governance efforts using different, badly (if at all) integrated systems and tools. This is true of ALM and ITSM tool integration, for example, despite the fact that both users and vendors have had years to implement it. Some data linkages exist, but the workflows as yet don't. Similarly, it will take years for SOA and cloud computing tools to integrate into a federated IT governance architecture. There are no cross-industry standards for exchanging ALM metadata or artifacts, and there probably never will be. On the ITSM side, there is more hope: growing acceptance of the Information Technology Infrastructure Library (ITIL) framework has planted the seed for a mandate to manage golden copies of key IT infrastructure configuration item data within federated configuration management databases (CMDBs). In 2007, most of the main players (BMC, CA, Fujitsu, HP, IBM, and Microsoft) agreed on the need for a specification for federating CMDBs. Unfortunately, the proposal has stalled since then and has still not been through a standardization process. Obviously, vendors cannot invent demand for integration specifications that currently lack any type of market groundswell, but they can set the stage by at least lowering the technology barriers to integration and raising awareness of the fact that it makes sense to close the loop on application quality and operational performance. The same applies to cloud computing service vendors.

Best-practice frameworks are missing for cloud


IT governance benefits from best-practice frameworks; cloud governance doesn't
IT governance practitioners, backed by both IT vendors and users, are endeavoring to turn IT governance from an art into a science via the use of best-practice frameworks. Some of these frameworks are IT-specific. They include the very broad Control Objectives for Information and Related Technology (COBIT) framework, launched in 1996, as well as frameworks more limited in scope such as the (Software) Capability Maturity Model, or (SW)CMM, which manages AD processes (although some organizations have applied it to ITO processes), and ITIL, which manages ITO processes.


Many governance frameworks are even more limited in scope. For example, a variety of them (FIPS PUB 200, ISM3, ISO/IEC 13335, 15408, 17799 and 27001, the IT Baseline Protection Catalogs, NIST 800-14) focus on security. Some are generic rather than IT-specific, such as the Balanced Scorecard framework (reused by COBIT) or the Projects in Controlled Environments (PRINCE) framework, applied to project management. Implementing governance frameworks is not that simple:
- They are evolving quickly. COBIT, for example, has recently acquired two new extensions, Val IT and Risk IT, to respectively help IT investment decisions and mitigate IT risks. Similarly, in 2002 (SW)CMM evolved into Capability Maturity Model Integration (CMMI).
- They have limitations. Most of them provide guidelines based on how actual enterprises have successfully managed IT, but some (e.g. ITIL) are not prescriptive, and organizations looking for an easy answer will be disappointed: there is no blueprint for implementation. Many are also old-fashioned, ignoring the real complexity of modern IT infrastructures.
- They have to be adapted to new trends such as SOA and cloud computing. Industry organizations have started to step in, however, and provide either specific guidance (e.g. cloud security guidance from the Cloud Security Alliance) or more generic guidance (e.g. the cloud computing code of practice unveiled in April 2010 by the UK-based Cloud Industry Forum).

Enterprises are struggling to keep up with best-practice frameworks; cloud computing could help
Enterprises are struggling to keep up with the following:
- The rise of these frameworks: COBIT, CMM, and ITIL have garnered limited support. Other approaches (such as Lean IT) have relatively few reference implementations.
- The detail of these frameworks: some of these frameworks are so broad (e.g. COBIT, ITIL) that few, if any, organizations have implemented all of their recommendations. Most adopters cherry-pick those portions that support specific corporate or IT objectives (such as consolidating the service desk in ITIL) or address key pain points (such as incident management or problem resolution in ITIL). Adaptation to specific circumstances and cherry-picking are the right way to go, however, provided organizations do not forget the big picture.
- The spirit of these frameworks: some IT organizations have used these frameworks in an inward-looking fashion to define processes and metrics without regard for the bigger picture of business goals, for example.
- The variety of these frameworks: there are many of them and they overlap and/or complement one another. Vendors add to the complexity with their own twists (e.g. the Microsoft Operations Framework for ITSM).
New cloud computing guidance needs to clearly specify how, and to what extent, cloud governance relates to established frameworks. In return, established frameworks need to evolve to take cloud computing into account, all the more so since cloud governance-as-a-service (GaaS) could help in the adoption of these frameworks. As GaaS offerings begin to appear, providers will be able to:
- provide feedback on the use of these tools so that companies can benchmark themselves against their peers
- leverage social web functionality so that governance practitioners can create IT governance implementation communities
- make it easier for governance efforts to be shared across LoBs/divisions and between partners.


5.5 ALM governance needs to expand to the cloud


ALM has migrated to the clouds
ALM is hybrid: it spans public and/or private clouds
ALM is hybrid from two points of view:
- Development tools and ALM infrastructure services (e.g. business intelligence, collaboration, requirement management, and change management services): deployed either on-premise or online, or both. For instance, development organizations can conduct project planning in the cloud but test on-premise, or vice versa.
- The applications being created by the ALM process: deployed either on-premise or online, or both. For instance, an application can process data on-premise but store those data online, or vice versa.

Public clouds provide developers with a variety of options


Developers use IaaS platforms to run their development tools and/or ALM infrastructure services as well as the applications they create. They use platform-as-a-service (PaaS) development and deployment facilities to quickly develop scalable applications. They also use SaaS development tools and/or infrastructure services, as well as integrating the applications they create with SaaS applications. Many SaaS providers have turned the infrastructure on which their SaaS offering runs into a PaaS offering to make the integration process easier.

ALM is for as well as in the public cloud


ALM can be in and/or for the public cloud, as follows:
- ALM for the public cloud: the use of on-premise and/or public-cloud-based ALM tools and/or infrastructure services to develop applications deployed in a public cloud.
- ALM in the public cloud: the use of public-cloud-based ALM tools and infrastructure services to develop applications deployed on-premise and/or in a public cloud.
The most likely trend is for vendors, especially large ones, to support both on-premise and online tools (e.g. Microsoft offers an on-premise ALM toolset for its Azure PaaS platform for now, with the plan being to migrate the toolset to Azure).

ALM in the cloud is a mixed bag


Many ALM tool providers have web-enabled their offerings but have yet to fully embrace SaaS. On the one hand, this reflects vendors' ability to adapt to customer requirements. For security reasons, some customers do not want multi-tenant tools, for example. Others do not want to adapt their procurement to subscription licensing and are happy with perpetual upfront licenses. On the other hand, it also reflects vendors' inability or unwillingness to redesign their tools or to evolve their business model. In any case, tool vendors need to make it easier for developers to take advantage of public cloud services, especially SaaS ones (such as collaboration services). There are more public-cloud-based infrastructure services than public-cloud-based development tools. This is fine, as infrastructure services are more important than development toolsets from an ALM point of view.


ALM and cloud governance should be woven together


ALM governance should facilitate ALM for and in the public cloud
Enterprises need to set up a governance framework that helps the AD team pick the right mix of IaaS, PaaS, and SaaS services as well as offerings within each type of service. For example, the use of SaaS infrastructure services could lower the barrier to entry for integrating a high-impact process, such as closing the QA/application performance lifecycle, as integration could be conducted tactically on a modest budget and hopefully with minimal integration headaches if lightweight data services are exposed by source systems. The two sides need to be properly integrated; for example, developers should deploy application code directly from a code management system rather than from development or pre-production storage to avoid version control issues.
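One way to encode the "deploy from the code management system, not from local storage" rule is a deployment gate that only accepts artifacts traceable to a tagged, approved build. The sketch below is a hypothetical illustration; in practice the manifest would be produced by the build server from the version control system.

```python
import hashlib

# Hypothetical manifest: release tag -> sha256 of the artifact built from that tag.
# The hash below is a placeholder value, not a real build.
RELEASE_MANIFEST = {
    "release-1.4.2": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def checksum(path: str) -> str:
    with open(path, "rb") as artifact:
        return hashlib.sha256(artifact.read()).hexdigest()

def deploy(artifact_path: str, tag: str) -> None:
    """Refuse to deploy anything that is not traceable to an approved, tagged build."""
    expected = RELEASE_MANIFEST.get(tag)
    if expected is None:
        raise RuntimeError(f"{tag} is not a tagged release in the code management system")
    if checksum(artifact_path) != expected:
        raise RuntimeError("artifact does not match the approved build for this tag")
    print(f"deploying {artifact_path} built from {tag}")
```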

ALM governance in the cloud is likely to start with tests


Many enterprises are not yet ready to trust public clouds to run production applications but are more than happy to use them as test platforms. As a result, public cloud test automation and test lab management offerings have been among the most successful public cloud offerings to date. They come with integrated SaaS tools and infrastructure services and/or integrate with on-premise ones (for so-called dev/test offerings). When it comes to cloud-based applications, the cloud is the logical testing environment. The decision is less obvious for on-premise applications. Even IaaS offerings may have difficulty keeping up with the rather complex infrastructures that many large companies have to deal with, or with applications that have specific requirements in terms of hardware, software, and performance, for example. This limits testing to fairly generic, relatively self-contained web applications. Testing in the cloud is also more relevant to organizations that have relatively sporadic testing needs. Those that have more frequent needs are likely to have already invested in a test environment of their own, although, as part of a private cloud effort, this environment can be turned into a pool of resources available to production applications. ALM governance, at this level, makes sure, among other things, that:
- existing test-related investments are maximized (including turning them into a private pool of internal resources)
- test data are treated with the security and privacy they deserve (instead of production data being used in the cloud)
- developers are aware of the SLA (functional and non-functional) parameters that underpin the cloud offering.
Increasingly, dev/test public clouds enable developers and testers to share resources such as best practices, templates, and testing artifacts. This will benefit all parties involved, but it also needs to be controlled.
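One concrete way of keeping production data out of cloud test environments is to mask sensitive fields before test data sets are uploaded. A minimal sketch follows, with hypothetical field names; real masking regimes are usually driven by the data governance policy rather than hardcoded.

```python
import hashlib

SENSITIVE_FIELDS = {"customer_name", "email", "account_number"}   # assumed schema

def mask_record(record: dict) -> dict:
    """Replace sensitive values with stable pseudonyms so tests stay repeatable
    while no real customer data leaves the firewall."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256(str(value).encode()).hexdigest()[:10]
            masked[key] = f"{key}_{digest}"
        else:
            masked[key] = value
    return masked

production_row = {"customer_name": "Acme Ltd", "email": "cfo@acme.example",
                  "account_number": "12345678", "balance": 1023.50}
print(mask_record(production_row))
```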

ALM governance in the cloud should start small but think big
Public clouds are unfamiliar territory for many developers. Most enterprises find it easier to start with a small specialist team to adopt ALM in the cloud and/or for the cloud. Starting small and taking it one step at a time is the right way to go. On the other hand, enterprises should not forget that, in the end, ALM in the cloud will make it easier to involve more people in ALM by offering:
- Centralized, standardized, dynamically scalable ALM infrastructure services to audiences distributed horizontally across geographies as well as vertically across organizational units. The zero-deployment benefits of online tools, coupled with monitoring and social web capabilities, simplify the task of ensuring that all relevant stakeholders view the status of the projects, tasks, processes, and activities that are relevant to them. This in turn makes the process of governance easier by getting everybody involved in the process onto the same (web) page.


- Self-contained (development plus deployment) offerings such as those provided by small PaaS vendors. These offerings can democratize ALM by making it easier for business or casual developers to get involved. They also lessen the pressure on IT departments by enabling casual developers to create so-called situational applications. They typically feature forms- or GUI-based development or mashup capabilities, rather than intensive programmatic or transactional coding.

ALM governance in the cloud is about risk management


It is up to each organization to find the mix of private and public cloud components that fits the risk profile of its ALM activities. Just because you could code in the cloud does not mean you should. There are quite a few reasons why ALM may be poorly suited to the cloud; corporate policies regarding the safeguarding of intellectual property or data governance could prevent it, for example. Policies may be selective, however. For instance, while code might be considered intellectual property that cannot be allowed outside the firewall, working with project plan artifacts or with compiled code (assembling builds or running performance tests) might be acceptable. On the other hand, if your company designs products with embedded software, letting project plans out into the cloud may be an intolerable exposure. Policies will likely vary by organization and industry. Keep in mind that maintaining code inside the firewall is not necessarily safer than doing so externally. Unless client machines are heavily locked down, code inside the firewall can escape when laptops are taken off-premise or files are copied to flash drives. Internal IT organizations may also lack the depth of expertise in securing data and assets that organizations that perform hosting for a living have built up.

ALM governance in the cloud can help weave ALM and ITSM governance together
ALM is expanding slowly to ITSM
When it first emerged, ALM focused on the software development lifecycle (SDLC). The objective was to integrate the various development process steps (e.g. design, development, testing) while ensuring clear separation of concerns and normalization between these steps. The concept of ALM then expanded to comprise the entire operating life of an application, from the cradle to the grave, based on the coordination of ALM and ITSM (as well as ITBM) processes. The reality of ALM has yet to catch up, however. In the on-premise world:
- Many user organizations have yet to properly implement ALM-as-SDLC. ALM's expansion from ALM-as-SDLC to cradle-to-grave ALM is not even on their radar.
- Vendors have only started to integrate their ALM-as-SDLC offerings and have yet to offer complete cradle-to-grave ALM support. What they offer is limited point integration between ALM and ITSM offerings.

Public clouds provide an opportunity to speed up the pace


Public cloud services provide both sides with an opportunity to take shortcuts in their move towards cradle-to-grave ALM. For example, VMware, via its SpringSource acquisition, mixes Cloud Foundry and Hyperic technologies to support Java cloud deployment and ALM. The integrated offering enables developers to create an Amazon IaaS runtime environment without having to deal with the intricacies of Linux scripting and Apache web server configuration; VMware IaaS clouds will follow. Based on application and infrastructure monitoring agents, the offering can automatically recover failed AMIs or database servers within AMIs. The ambition is to integrate these facilities with SpringSource's ALM environment. The integration of development, configuration, deployment, and monitoring technologies is one of the key tenets of a new movement (called DevOps) that seeks to bridge the old AD/ITO divide. This movement is closely tied to the emergence of public cloud platforms. The objective is to empower developers to deploy directly to the cloud without involving system administrators, and to facilitate cooperation between developers and system administrators so that the latter are kept in the loop rather than ignored.


ALM governance in the cloud must live up to the challenge of cloud-centric applications
Adapting to IaaS and PaaS constraints
ALM governance needs to ensure that developers understand and adapt to IaaS and PaaS application deployment environments from a variety of perspectives, including:
- Cost: developers need to understand the impact of IaaS and PaaS pricing structures on the software design choices they make (a back-of-the-envelope sketch follows this list). For example, a pricing structure that combines cheap compute resources with not-so-cheap storage resources should be met with applications that are processing-intensive but not overly demanding from a storage point of view (or that at least compress data). It should not be the other way around, as this would go a long way towards neutralizing the cost savings that public clouds are supposed to deliver.
- Design constraints: IaaS offerings usually provide bare-bones virtualized environments that leave developers to their own devices from a management perspective. They need to code the application in a way that enables it to take control of its environment and scale up to the level they want. PaaS platforms handle much of this housekeeping but constrain application design to specific patterns.
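The cost point can be made concrete with back-of-the-envelope modeling during design reviews. The figures below are assumptions for illustration, not real provider prices.

```python
# Illustrative unit prices (assumptions, not actual provider rates).
PRICE_PER_COMPUTE_HOUR = 0.10    # per instance-hour
PRICE_PER_GB_MONTH = 0.30        # per GB stored per month

def monthly_cost(instance_hours: float, stored_gb: float) -> float:
    return instance_hours * PRICE_PER_COMPUTE_HOUR + stored_gb * PRICE_PER_GB_MONTH

# Design A: processing-intensive, compresses data before storing it.
# Design B: stores raw data and does relatively little processing.
design_a = monthly_cost(instance_hours=2000, stored_gb=200)
design_b = monthly_cost(instance_hours=300, stored_gb=5000)

print(f"Design A (compute-heavy, compressed storage): {design_a:.2f}/month")
print(f"Design B (storage-heavy, raw data):           {design_b:.2f}/month")
```

Under this particular price book, design A costs 260 per month against design B's 1,530, which is the kind of comparison ALM governance should expect to see in a design review.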

Moving out of IT's comfort zone


IaaS offerings, and to a larger extent PaaS ones, push IT departments out of their comfort zone by promoting a variety of transitions:
- From relatively simple, stateful client-server and N-tier systems to more distributed, stateless, parallelized applications that relate to the underlying compute infrastructure via asynchronous, persistently queued events (a minimal sketch follows this list).
- Towards availability and partition tolerance at the expense of consistency: according to the CAP theorem defined by Inktomi's Eric Brewer in 2000, only two out of three key distributed system design objectives (consistency, availability, partition tolerance) can be achieved at any one time. Many public clouds reject the strictures of the atomicity, consistency, isolation, durability (ACID) approach in favor of the more laid-back basically available, soft state, eventually consistent (BASE) approach, among other techniques such as compensating transactions, ubiquitous caching, and distributed replication.
- Away from mainstream technologies such as relational databases, in favor of newer as well as older alternatives.
Many enterprises are not capable and/or willing to be pushed out of their comfort zone just yet. The responsibility of those in charge of ALM and cloud governance is to make sure that IT teams:
- do not get out of their comfort zone without being ready for it (and without a strong business case behind the move)
- do get out of their comfort zone rather than hide in it.
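To illustrate the first of these transitions, the sketch below processes work from a persistent-style queue asynchronously and makes the consumer idempotent, so that replayed deliveries converge to the same state (a BASE-style behavior) rather than relying on a single ACID transaction. It is a simplified, in-memory stand-in for a real cloud queue service.

```python
import queue
import threading

events = queue.Queue()      # stand-in for a persistent cloud message queue
orders = {}                 # eventually consistent view, keyed by order id
processed_ids = set()       # idempotency guard: replayed events are ignored

def consumer() -> None:
    while True:
        event = events.get()
        if event is None:                    # shutdown sentinel
            break
        order_id, quantity = event
        if order_id not in processed_ids:    # tolerate at-least-once delivery
            processed_ids.add(order_id)
            orders[order_id] = orders.get(order_id, 0) + quantity
        events.task_done()

worker = threading.Thread(target=consumer, daemon=True)
worker.start()

# Producer side: fire-and-forget, including a duplicate delivery of order-1.
for event in [("order-1", 5), ("order-2", 3), ("order-1", 5)]:
    events.put(event)

events.join()
events.put(None)    # ask the consumer to shut down cleanly
print(orders)       # {'order-1': 5, 'order-2': 3} despite the duplicate
```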

Developing the right SaaS applications


When the application being developed is a SaaS one, the ALM governance process needs to ensure that ALM does the following:
- Keeps up with the SaaS application: traditional on-premise software applications have relatively long upgrade cycles, particularly if they are so complex and poorly documented as to turn any change into a potential risk to the underlying IT infrastructure and/or neighboring applications. SaaS applications, on the other hand, are expected to have short review cycles to adapt to user feedback. This adaptation is key to customer adoption.


- Produces a properly designed SaaS application: that is, one with acceptable costs and QoS levels, ease of integration, relevant functionality, and the right design choices. The SaaS approach to functionality is different to that of traditional on-premise applications. The latter focus on feature breadth and depth; SaaS applications focus on the 20% of features used by 80% of users. Design choices are critical at all levels, including infrastructure, interface, and service delivery. Developers need to pay particular attention to service-delivery-centric design, since they are likely to be unfamiliar with what it entails. They need, for example, to make it easy for users to trial the application or to use various levels of features (which may be linked to various price levels). Service-delivery-centric design also requires developers to build usage-monitoring capabilities into the application.
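A minimal sketch of what building service delivery into the application can look like follows: feature entitlements tied to subscription tiers, plus a usage meter that later feeds billing and product decisions. Tier names and features are hypothetical.

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical entitlement matrix: which features each subscription tier may use.
TIER_FEATURES = {
    "trial":      {"core_reports"},
    "standard":   {"core_reports", "export_csv"},
    "enterprise": {"core_reports", "export_csv", "api_access", "sso"},
}

usage_log = []   # in a real SaaS application this would go to a metering store

def call_feature(tenant: str, tier: str, feature: str) -> bool:
    allowed = feature in TIER_FEATURES.get(tier, set())
    usage_log.append({
        "tenant": tenant, "feature": feature, "allowed": allowed,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return allowed

call_feature("acme", "standard", "export_csv")
call_feature("acme", "standard", "api_access")   # denied: a potential upsell signal
print(Counter((entry["feature"], entry["allowed"]) for entry in usage_log))
```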

ALM governance for hybrid applications


Traditional applications are being extended via integration with:
- an increasing number of SaaS applications
- new application functionality/services deployed on a PaaS or IaaS platform.
The ALM process needs to make sure that these on-premise applications are properly designed to integrate cleanly with both (e.g. no direct or hardcoded connections, but loose coupling via well-defined APIs).
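The loose-coupling point can be sketched as an internal adapter interface: the on-premise application codes against its own contract rather than a SaaS vendor's endpoint, so the external service can be swapped or stubbed without touching application code. The CRM endpoint and payload shape below are hypothetical.

```python
import json
import urllib.request
from abc import ABC, abstractmethod

class CustomerDirectory(ABC):
    """Internal contract the on-premise application codes against."""
    @abstractmethod
    def find_customer(self, customer_id: str) -> dict: ...

class SaasCrmAdapter(CustomerDirectory):
    """Adapter for a hypothetical SaaS CRM REST API."""
    def __init__(self, base_url: str, api_key: str):
        self.base_url, self.api_key = base_url, api_key

    def find_customer(self, customer_id: str) -> dict:
        request = urllib.request.Request(
            f"{self.base_url}/customers/{customer_id}",
            headers={"Authorization": f"Bearer {self.api_key}"})
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())

class InMemoryDirectory(CustomerDirectory):
    """Stub used in tests or when the SaaS service is unavailable."""
    def __init__(self, records: dict):
        self.records = records

    def find_customer(self, customer_id: str) -> dict:
        return self.records[customer_id]

def print_invoice_header(directory: CustomerDirectory, customer_id: str) -> None:
    customer = directory.find_customer(customer_id)
    print(f"Invoice for {customer['name']}")

print_invoice_header(InMemoryDirectory({"42": {"name": "Acme Ltd"}}), "42")
```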

5.6 Cloud governance builds on SOA governance


Cloud computing is SOA-centric
Same architecture-centric approach and design principles
Both private and public clouds have an SOA-centric design and share SOA's service-delivery-centric approach to IT. SOAs are neither methodologies nor solutions or technologies, although the concept of SOA was at first overly linked to web service standards, then to lightweight integration middleware packages known as enterprise service buses (ESBs), before being joined at the hip to business process management (BPM) technology. An SOA is the by-product of:
- an EA-centric approach to software AD and management
- the application of old (but good) software engineering principles (abstraction, loose binding/coupling, and separation of concerns) to software AD and management.
SOA initiatives take a higher-level perspective on splitting up software than older componentization efforts by moving:
- from individual applications to the overall architecture, making fewer assumptions about how things should be designed
- from the notion of software to that of service, whereby the focus is not so much on coding software components as on defining their purpose, delivered via the provision of contract details and capabilities information in the context of policies.

Present at both the SOI and SOA levels


There are two main levels of services in an SOA:
- The platform service level, also known as service-oriented infrastructure (SOI), delivers the hardware and software foundation (server, network, database, operating system, clustering/grid/virtualization technologies, etc.) on which software components run, but from which they are abstracted.


- The application/process service level consists of software-only services, provided by software components to one another. These are the types of service that most people refer to when talking about SOA services.
Public clouds relate to SOA at both:
- the platform service/SOI level: IaaS and PaaS runtime platforms have been designed on the basis of SOA engineering principles
- the application/process (SOA) service level: many SaaS applications have been designed on the basis of SOA engineering principles.
Similarly, private clouds are expected to follow an SOA approach at both levels.

Cloud governance helps SOA governance


Cloud governance strengthens the need for SOA governance
The notion of governance is central to an SOA, and cloud computing makes it even more so. The objective is to sustain and guarantee SLA parameters at two levels:
- Service-centric software design: between software service consumers and providers. At this level, SOA promotes ALM governance with a view to delivering properly designed software services. There are several differences between traditional applications and SOA software services. For example, traditional applications are not usually designed for reuse and are deployed one version at a time (while the development and deployment infrastructure that underpins SOA software services supports versioning, as different classes of service consumer might be entitled to different versions of a service). However, instead of focusing on the differences, an alignment of SOA and ALM governance should focus on the way ALM and SOA development processes can leverage, and learn from, one another. This means putting all development projects in an architecture context and coordinating ALM and SOA lifecycle management.
- Service-centric software delivery: between IT as a service provider and the business as an IT service consumer. At this level, SOA promotes IT governance with a view to delivering properly costed, usable, and reliable software services.
The irony is that SOA, meant to enable IT to slice through organizational as well as technology silos, has found itself a prisoner of such silos and, as a result, disconnected from IT and ALM governance efforts. The objective of any SOA governance effort must be a reconnection. As part of a federated approach to IT governance, the introduction of cloud governance provides a good opportunity for enterprises to do so, and to map:
- their cloud governance efforts to their SOA governance initiatives: because they are SOA-centric at both the platform service/SOI level and the application/process (SOA) service level, public clouds promote the development of good SOA practices
- their SOA and ALM governance efforts to one another: instead of approaching SOA software service development as an island, public clouds provide IT governance with an opportunity to coordinate ALM and SOA development efforts.

Cloud computing governance helps SOA transitions


In the past few years, SOA initiatives have been going through a variety of transitions:
- From building (software components and their underlying SOI) to composing (software components together into applications and processes): it is at this level that SOA meets BPM for heavyweight processes, and Web 2.0 mashups for lightweight, user-interface-level combinations of software (and data) components.
- From development of software services to management of said services: this transition is overly technology-centric. In the same way that it was once believed that the ESB makes the SOA, it is now too often believed that the SOA registry and repository respectively make design-time governance (policy definition) and runtime governance (realtime policy enforcement). SOA governance is not that simple.


- From bottom up (technology) to top down (business): this transition is not so much about moving from one perspective to the other as about linking the two together. SOAs are enterprise-level, cross-functional initiatives that span multiple LoBs as well as IT domains. They need to involve both business and IT so that SOA software component policies reflect enterprise-level as well as department-level business policies.
- From using internal software services to using external services: SOA has always had an external, business-to-business aspect, although most SOA efforts have so far been internal, application-to-application efforts.
- From service-centric software design to service-centric software delivery: while an SOA and the software development process that underpins it do not on their own turn IT into a service-centric, software-delivery-driven organization, any more than a chisel and hammer turn masons into sculptors, they make the process easier.
SOA governance ensures the successful implementation of every one of these transitions. This means, for example, making sure that the AD team:
- does not focus on reuse at the expense of the overall architecture perspective; significant retraining and/or participation in mixed teams with architectural mentoring is essential to avoid an overly development-centric approach to SOA
- does not dive too quickly into process/BPM issues (an approach that requires too many details too early), instead of dealing with business outcomes first.
Cloud computing helps enterprises manage SOA transitions:
- From SOI to SOA: IaaS and PaaS offerings, as well as private cloud platforms, do not so much expand the choice of SOIs as provide an SOI shortcut, allowing enterprises to focus on the delivery of software services.
- From building to composing: PaaS and SaaS public cloud offerings provide ready-to-compose building blocks (via a self-service portal/catalog targeted at developers).
- From service-centric software design to service-centric software delivery (via a self-service portal/catalog targeted at end users).
- From bottom up (technology) to top down (business): in this instance it is actually more a case of SOA governance helping cloud computing governance, as cloud computing initiatives are currently too technology-centric.
- From using internal software services to using external services: public cloud services provide SOA initiatives with an easy way to reach out to third-party software components.
However, public cloud offerings are still rather immature and not quite ready to fully support SOA's shift from development to management.

SOA governance paraphernalia have yet to move to the cloud


SOA relies on design-time and runtime infrastructure and tools to define and enforce the policies behind the SLA parameters that underpin both types of consumer/provider relationship. Because service consumers are not necessarily known in advance, the runtime infrastructure is particularly important in providing a layer of abstraction that enables SOA software service providers to adapt to the demands of new consumers. It is also responsible for the realtime enforcement of these policies. It consists of a variety of tools, such as metadata catalogs, asset tracking, and system management tools. The star of the SOA runtime show is the repository, responsible for managing the software service lifecycle and the dependencies between software services and SOA policies. While the transition of ALM tools and infrastructure services to public clouds has started, the transition of SOA tools and infrastructure services (not just their deployment on a public cloud but their ability to take advantage of cloud capabilities) has barely begun. Public cloud service vendors have yet to provide complete SOA infrastructures that enable developers to discover and view their software services and associated policies. Design-time policies cannot be enforced in a cloud runtime. Federation or synchronization standards between internal and public-cloud-based registries/repositories are lacking, and even if they were available there would still be the challenge of mapping internal and public cloud SOA policies to one another. Cloud governance needs to federate with IT/SOA governance to manage these difficulties.


Interfaces are the link between SOA and cloud


APIs to services are at least as important to SOAs as SOA services
SOA software service providers make their features available to service consumers via application programming interfaces (APIs) that define the following:
- What is available: not all SOA services are made available by service providers, and those that are do not necessarily make all of their features available.
- Who it is available to: security policies may drive the enterprise to expose some services to only a select set of users.
- How to get it: which protocol and semantics to use to communicate and structure a service request.
APIs provide a layer of abstraction between service providers and consumers. They can provide services that mediate between the two sides based on a number of technologies, from simple scripts and look-up tables to service brokers. From an SOA point of view, API design is as important as, if not more important than, the design of the SOA service itself. Good API design principles include technology neutrality, use of standards, and ease of discovery and invocation.

APIs are the keys to the cloud computing kingdom


Private and public cloud services are available via either a GUI (for human consumers) or an API (for software consumers). For example, public cloud APIs are used:
- in an IaaS environment, to provision, start, stop, and terminate virtual machines and manage virtual machine metadata and related services
- in a PaaS environment, to deploy application instances
- in a SaaS environment, to access application functionality and data.
Software consumption of public cloud services is ahead of human consumption. Salesforce.com, for example, gets more than 60% of its traffic via APIs, and the percentage is likely to increase as more devices take advantage of cloud services. The number and complexity of these APIs is growing, with many public cloud vendors trying to turn their APIs into de facto standards, leading to the creation of a new acronym: YACA (yet another cloud API). The partner ecosystems built around these APIs are also growing (along with the need for public cloud providers to manage these ecosystems). Some public cloud providers are promoting multi-cloud APIs such as Deltacloud, jclouds for Java developers, or libcloud for Python developers. IBM and Microsoft support the Simple Cloud API project launched in September 2009 by Zend Technologies to make it easier for applications running on the Amazon IaaS offering to migrate to their own clouds (or between their public clouds and private clouds).
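As an illustration of such multi-cloud APIs, the sketch below uses the Apache libcloud library mentioned above to provision a virtual machine through a provider-neutral interface. The credentials are placeholders, and the choice of provider constant, image, and size is an assumption; the same calls are intended to work against the other drivers the library supports.

```python
# Requires the apache-libcloud package.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

ACCESS_KEY = "YOUR_ACCESS_KEY"        # placeholder credentials
SECRET_KEY = "YOUR_SECRET_KEY"

Driver = get_driver(Provider.EC2)     # swap the constant to target another cloud
conn = Driver(ACCESS_KEY, SECRET_KEY)

sizes = conn.list_sizes()
images = conn.list_images()

# Provision, inspect, and tear down a node through the provider-neutral API.
node = conn.create_node(name="governance-demo", size=sizes[0], image=images[0])
print([(n.name, n.state) for n in conn.list_nodes()])
conn.destroy_node(node)
```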

APIs are important to both service providers and consumers


Public cloud APIs need much stronger governance than internal APIs. Public cloud service providers are increasingly aware that their public cloud offering is only as strong as its weakest API. They have started to turn to a number of vendors that help them manage their APIs in a way that extends the appeal of these APIs to a wider audience, thus helping to generate more revenue. These vendors can also help enterprises turn from public cloud service consumers into public cloud service providers. A few companies have already made this move, and we expect many more to follow. API management tools enable them to keep up with the following:
- API traffic and usage: to make sure, for example, that big customers have priority over smaller ones (or have access to different APIs), that APIs do not turn into performance bottlenecks, and that the most-used APIs get the biggest share of investment (a simple illustration follows after this list)
- API versioning: to help partners and customers move to new versions if and when necessary
- API mediation/transformation: to keep API numbers at a manageable level
- API requirements of end points


- API policy definition and enforcement
- API security: APIs are one of the first ports of call for hackers.
API management tools need to be backed by strong governance, as API design and management impact both business and IT at various levels. For example, governance support is required to define the context of:
- API monetization: service providers are currently testing a variety of business models and go-to-market strategies that mix free basic APIs and paid-for enterprise APIs on the basis of a variety of parameters, such as who is using the API and how many requests are sent to it, as well as a variety of licensing and payment conditions.
- API openness and constraints: an increasing number of APIs are open (well designed and documented, and available via self-service). Many open APIs only require developers to give attribution (e.g. the Rackspace API). Others are more demanding (the Google API prevents reverse engineering, for example).
- API adoption and community-building efforts.
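The traffic and tiering points above can be illustrated with a simple token-bucket limiter that gives bigger customers a higher request allowance. Tier names and limits are hypothetical; commercial API management products implement this kind of control at the gateway.

```python
import time
from dataclasses import dataclass, field

# Hypothetical request allowances per API tier (requests per minute).
TIER_LIMITS = {"free": 60, "enterprise": 6000}

@dataclass
class TokenBucket:
    rate_per_minute: int
    tokens: float = field(init=False)
    last_refill: float = field(default_factory=time.monotonic)

    def __post_init__(self) -> None:
        self.tokens = float(self.rate_per_minute)

    def allow(self) -> bool:
        now = time.monotonic()
        refill = (now - self.last_refill) * self.rate_per_minute / 60
        self.tokens = min(self.rate_per_minute, self.tokens + refill)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {("acme", "enterprise"): TokenBucket(TIER_LIMITS["enterprise"]),
           ("hobbyist", "free"): TokenBucket(TIER_LIMITS["free"])}

for caller, bucket in buckets.items():
    allowed = sum(bucket.allow() for _ in range(100))
    print(caller, "allowed", allowed, "of 100 burst requests")
```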

5.7 Recommendations
Recommendations for enterprises
Cloud computing requires governance, but slowly does it
You need a governance framework to benefit from, and adapt to, cloud computing. There is no big bang implementation of cloud governance, but rather a gradual build-up that provides an opportunity to launch and/or reinvigorate other governance efforts. The objective is to build a positive feedback loop between the various governance domains as part of a wider federated IT governance effort.

Move towards increasingly shared and service-centric IT


Cloud computing is one driver, among others, towards increasingly shared and service-centric IT. Service-centric IT is not the end but the means that underpins a top-down approach to IT focused on business scenarios and outcomes rather than a bottom-up approach focused on technology and vendor stacks. This is particularly relevant to cloud governance, where technology considerations often take precedence over business concerns.

Cloud governance is in its infancy


Cloud governance best practices and underlying tools are in their infancy, so you need to cast your net wide and learn from those ahead of you on the learning curve, and coordinate with vendors to build up these best practices and tools.

Expand ALM and SOA governance to the cloud


Developers have taken to the cloud like fish to water, but their enthusiasm needs to be tempered and controlled by weaving cloud governance into ALM and SOA governance. From an ALM point of view, this will make it easier to connect ALM and ITSM governance. From an SOA point of view, it will bring to the fore the need to understand and manage cloud APIs.

Recommendations for vendors


Help users' governance efforts
When outsourcing started and enterprises did not have a clue about how to manage it from a governance point of view, the outsourcing service provider had to step in to facilitate the process of communication between the business and IT, as well as between ITBM, operations, and development groups. They did not take over governance, but in many cases were forced into a commanding role due to the absence of managed communication lines and objectives. Similarly, many cloud service and tool providers need to step in and help enterprises define and implement cloud governance.


Boost best practice, tools and integration


Besides helping users define best practices and tools, you need to work on standards to integrate cloud governance tools with one another and other governance systems.

Weave cloud governance into your ALM and SOA governance tools
To avoid cloud governance becoming yet another governance silo, ALM and SOA governance tool providers should adapt their offering to the requirements of both private and public clouds.

Collaboration and testing are the cloud ALM sweet spots


Team-related activities and testing are well suited to the cloud. On the other hand, there is still a question mark over coding in the cloud, not just because of corporate policies regarding the safeguarding of intellectual property but also owing to the commoditization of integrated development environments and other developer client-related tools such as debuggers, unit testing tools, and build-scripting tools.

API management is the new cloud SOA frontier


Both public and private cloud services need to be accessible via clearly defined APIs. The management of these APIs from a cost and QoS perspective is becoming an increasing challenge that service providers, both inside and outside the enterprise, need to live up to.



CHAPTER 6: Public clouds require IT service management to adapt


6.1 Summary
Catalyst
As the adoption of public cloud services becomes more prevalent, IT organizations face a variety of new IT service management (ITSM) challenges. Not only will they need to ensure that existing policies, processes, procedures, and supporting technologies are fit for externally delivered IT services, but also that the IT function remains relevant to the business as an effective part of the IT service delivery chain.

Ovum view
ITSM is key to the adoption of cloud computing
CIOs will need to leverage ITSM people, processes, tools, and techniques to understand which IT services would be better served from private and/or public clouds, and the priority for migration from private to public (and vice versa) based on business pain points, internal capabilities, the existing quality of service (QoS), and the cost of provision, with the aim of providing better customer service and improved productivity.

ITSM skills are key to managing public cloud service delivery


From a people perspective, there is no doubt that IT organizations will need to evolve to reflect the change in focus caused by the externalization, and loss of immediate management, of some IT infrastructure and IT services. However, even in an IT organization that migrates all IT services to public clouds, there will still be a need for IT or business resources to manage service delivery using best-practice ITSM processes.

ITIL v3 service strategy disciplines are important when optimizing public cloud usage
Within the Information Technology Infrastructure Library (ITIL) v3 service strategy area, the processes of financial management, service portfolio management, and demand management will become more important, as IT service costs are driven by the level of public (as well as private) cloud service consumption. Enterprises need to get their IT financial management (ITFM: the function and processes responsible for managing an IT service provider's budgeting, accounting, and charging requirements) and service-level management (SLM: the process of maintaining and gradually improving business-aligned service quality) act together prior to moving to public clouds. Following service migration (to public clouds), there is potential for IT functions to lose both visibility and control of IT services within the cloud. ITFM and SLM will again both be key ITSM capabilities, as will ramped-up supplier management capabilities, as IT organizations endeavor to manage a blended mix of on-premise and cloud-based IT service delivery.

ITIL v3 service design elements will be needed for public cloud service delivery
IT service continuity and information security will become higher-profile ITSM activities in light of business concerns related to public clouds. Availability and capacity management will still be relevant, with the former potentially posing a big risk that must be closely managed and the latter hopefully being simplified through public clouds' ability to adjust capacity as needed. Service level, service catalog, and supplier management will also be critical people activities in ensuring that public-cloud-based IT services deliver on their promises.


ITIL v3 service catalog management is particularly relevant to private cloud service delivery
The ITIL v3-espoused service catalog management process and enabling technology can be used not only for the design and costing of public-cloud-delivered services but also for private cloud self-service provisioning, supporting the cloud ethos of agility and cost-efficiency. By defining a service catalog with a menu of standard service options, policy-governed self-service, and other key service management capabilities (such as pricing and usage tracking), IT organizations will be better able to manage private cloud service delivery.
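A minimal sketch of how a catalog's pricing and usage tracking can feed showback or chargeback for private cloud services follows. Service names, unit prices, and consumption records are hypothetical.

```python
from collections import defaultdict

# Hypothetical catalog: standard service options with unit prices.
CATALOG = {
    "small_vm":  {"unit": "instance-hour", "price": 0.08},
    "backup_gb": {"unit": "GB-month", "price": 0.12},
}

# Usage records captured by the self-service portal: (business unit, service, quantity).
usage = [
    ("marketing", "small_vm", 720),
    ("marketing", "backup_gb", 500),
    ("finance",   "small_vm", 1440),
]

def chargeback(usage_records) -> dict:
    totals = defaultdict(float)
    for business_unit, service, quantity in usage_records:
        totals[business_unit] += quantity * CATALOG[service]["price"]
    return dict(totals)

for business_unit, amount in chargeback(usage).items():
    print(f"{business_unit}: {amount:.2f} for the period")
```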

IT organizations need to be thinking now about the best approach to adding tools for managing a hybrid data center environment
Niche cloud management vendors have already emerged with solutions that address a wide spectrum of private and public cloud management needs and, in the same way that traditional systems management vendors have added virtualization capabilities to existing tools, so they will do with cloud-based capabilities. There is also a third tool possibility, in which third-party vendors, spotting a gap in the market, create solutions that allow cloud management vendor tools and systems management vendor tools to operate in their non-native environments.

ITSM practitioners should consider SaaS management tools


IT organizations should not overlook a cloud opportunity that exists closer to home: moving the corporate IT service desk tool outside of the firewall. Software-as-a-service (SaaS) delivery is a good fit for ITSM tools and, with the continuing influx of SaaS-delivered solutions from SaaS-only vendors, traditionally on-premise ITSM vendors, and new dual-play vendors, SaaS-delivered ITSM tools will continue to erode the market dominance of their on-premise siblings.

Key messages
- Public clouds are changing the IT function.
- Public clouds are changing the ITSM landscape.
- ITSM technology has a big role to play in managing public clouds.

6.2 Public clouds are changing the IT function


The IT function is dead, long live the IT function
Public clouds will not kill IT departments
Some schools of thought predict the demise of the traditional internal IT function with the adoption of public clouds, based on the argument that IT capabilities and skills will become redundant as the IT environment is pushed outside of the corporate firewall. The reports of the death of the IT function are both a little early and, in Ovum's opinion, misguided. The move to the cloud will not happen overnight. Vendor hype is still way ahead of corporate adoption, meaning that traditional IT people and skills will still be needed for some time yet, and even if public clouds become the main delivery channels, they will change the IT function but not eliminate it. Even with SaaS, where almost everything is handled by the service provider, IT will still have to do configuration and change management. In fact, change will come much more often: at least two upgrades a year versus one upgrade every 18 months for on-premise software.


Cloud computing will be hybrid


It will not be all or nothing with cloud adoption: the reality will be organizations delivering IT services via a blended mix of on-premise and public cloud-hosted applications, with internally delivered IT services requiring traditional IT people skills and capabilities. Externally provided IT services will still need internal IT infrastructure to be delivered to end users within an enterprise.

Public clouds change IT departments


Current IT organizations can be poorly structured
The traditionally organized IT department, which has evolved around siloed tools and is often characterized by many siloed teams of technical specialists, has also driven the technology selection process within organizations, a process that is to a large extent governed by the existing skills within the organization's IT department. This approach creates tension between the requirements of business users and the ability of the IT department to manage the technology: IT resources are locked into particular technologies, and organizations face expensive retraining or new hiring costs if technologies new to the organization are selected.

IT organization structure needs to become more matrix-like


Beyond the individual, Figure 6.2.1 shows Ovum's view of how IT departments will change as first virtualization and now cloud technologies become more widely adopted.

Figure 6.2.1: Impact on the IT organization (corporate strategy, executive management, and the CIO sit above a matrix that spans business unit heads of department, IT management, and business, functional, and technical specialists, together with outsourced or service providers, covering areas such as financial analysis, Java development, IT strategy, and project management within the IT department)

Source: Ovum

This structure might appear at first glance to be replacing one set of silos with another, with less clearly defined roles and responsibilities, but it is nonetheless likely to be the more generally adopted approach, with a number of iterations to refine the model over time, because the implications of the proposed change are more fundamental and far-reaching than they first appear. The principle behind the model is that organizational responsibility will become more matrix-like in its construction, translating into employees wearing more than one hat.


This model promotes specialization, but only in areas that need a particular focus
This might give the impression that the model is nothing more than a de-skilling of IT, effectively making everybody a generalist. However, on closer inspection, the model actually promotes specialization, but only in areas that need a particular focus. It also supports the concept of overlapping responsibility, so that demarcation, or gaps in coverage, becomes less of an issue. It will necessitate the creation of new responsible, accountable, consulted, informed (RACI) charts for roles, potentially extending those created for an outsourcing model. At a basic level, this might mean the creation of a specific commercial team to manage outsourced providers, with a specific focus on actual service delivery, contractual issues (often with differing interpretations), the financial implications of changes, and the application of service credits (penalties) where appropriate.

With more complex service-provision scenarios, the relatively new concept of service integration could be applicable across multiple suppliers. This is based on the fact that, while IT outsourcing can deliver significant cost savings and improve an organization's ability to compete, only around 50-60% of outsourcing partnerships are thought to be successful. To succeed with outsourcing, IT organizations need a robust IT governance and service management structure between the service recipient and the service providers for service integration. This should be implemented at the outset of any partnership, given that an estimated 80% of the causes of failure are attributed to governance issues. Consultancy organizations such as Capgemini, IBM, and Tata Consultancy Services are now providing both service integration advice and services.

We believe that some organizations will adapt to cloud computing based on the self-selecting team principle, where the concept of management is reduced in scope and leaders attract team members based on merit and the strength of their proposals, and that IT teams will become more fluid, potentially changing dynamically as both supply and demand change over time. The transition to this approach to IT structure will happen organically as IT leaders recognize the merits of individual team members and their pertinent skills. It will require IT leadership to focus more closely on individual development and cross-team capability reviews to ensure that skills consistently meet the requirements of a potentially changing set of needs. This fluidity of operation, and the emphasis on strong individuals performing within flexible teams, will also necessitate a change in the way that both personal and team performance is recognized and rewarded by IT.

6.3 Public clouds are changing the ITSM landscape


Some ITSM issues are traditional, others are new
Quality of service and governance rules apply to public clouds
From an overall IT governance perspective, external cloud services need to be managed as tightly as internally delivered services in terms of risk, security, and standards compliance. IT services will still need responsible internal owners no matter how they are delivered (as with an outsourcing model). An IT organization needs to have a good understanding of the IT services it provides, along with the service-delivery quality levels required and the service-level agreement (SLA) targets agreed with the business.

People issues related to public clouds need to be addressed


Moving IT services, or more specifically the delivery of IT services, to public clouds is not just a technology issue for IT service managers. They also need to manage the cultural implications of this IT and business change. At minimum there are end-user education needs to fulfill, and the potential for non-conformance needs to be managed through a mixture of communication, education, training, and senior management support. Business perceptions of the potential for degradation of service in terms of availability, speed of response, and security also need to be managed, with a special focus on data governance issues, which are currently among the top business concerns when it comes to public clouds.

Public clouds' ease of procurement is both a blessing and a curse


The ease with which public-cloud-delivered IT services can be procured is both a benefit and a curse. Many are procured directly via lines of business (LoBs) rather than via an internal IT function governance gateway. This stores up service-management-related issues further down the line as the LoBs start to complain about business-critical issues related to cloud-delivered IT services that the IT function knows nothing about.


Public clouds require the same ITIL-espoused ITSM disciplines as internal IT


ITIL is still very much relevant but needs to evolve
While ITIL v3 proposes the processes IT organizations require to effectively manage service delivery, whether from an on-premise private cloud or from a public cloud, its adoption to date (with true organizational commitment to the concepts of IT as a service and of the service lifecycle) has been somewhat patchy. As with ITIL v2, the mantra of adopting and adapting the ITIL processes that deliver the greatest business benefit still holds true, with ITIL process adoption prioritized based on a combination of immediate IT and business pain points and needs. However, with the advent of public cloud computing, Ovum expects the need for certain ITIL processes and associated people skills and capabilities to come to the fore.

Understanding and applying the service lifecycle is critical


Understanding and applying the service lifecycle is critical, with IT staff viewing and managing technology provision as discrete IT services aimed at consistently meeting business needs for the technology enablement of business processes. Service-level management (SLM, the process of maintaining and gradually improving business-aligned service quality) is also vital, as the ability to effectively manage service delivery from a third party will become paramount from a business continuity perspective. While the concept of delivering IT as a service was key within earlier versions of ITIL, ITIL v3 introduces the concept of the service lifecycle as the backbone of IT service delivery, with service design, service transition, and service operation as lifecycle phases for change and transformation, service strategy for policies and objectives, and continual service improvement for learning and improvement (see Figure 6.3.1).

Figure 6.3.1: OGC/ITIL service lifecycle (the figure shows service strategy at the core of the lifecycle, surrounded by the service design, service transition, and service operation phases, with continual service improvement wrapped around them). Source: OGC

To elaborate, the Office of Government Commerce (OGC) Service Strategy book offers the following description of the service lifecycle model:


Service Strategy is the axis around which the lifecycle rotates. Service Design, Service Transition, and Service Operation implement strategy. Continual Service Improvement helps place and prioritize improvement programs and projects based on strategic objectives. An important point to note is that the service lifecycle is driven by strategy, not operational expediency.

There is no doubt that IT organizations will need to evolve to reflect the change in focus caused by the externalization and loss of immediate management of some IT infrastructure and IT services. However, even in an IT organization that migrates all IT services to public clouds there will still be a need for IT resources to manage service delivery using best-practice ITSM processes, and for systems administrators and networking professionals to acquire new skills for the new complexities of IT service delivery that public clouds will bring.

Service strategy: financial, service portfolio, and demand management focus


Within service strategy, the processes of financial management, service portfolio management, and demand management will become more important as IT service costs are driven by the level of service consumption, with these processes used for:
- creating the business case for migration to the cloud and ongoing cost control
- assessing delivered business value relative to the costs of cloud-delivered IT service provision
- balancing supply and demand.
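As a simple illustration of why financial management and demand management matter more in a consumption-charged world, the sketch below compares an assumed flat on-premise cost with an assumed usage-driven public cloud charge; the rates, volumes, and break-even logic are purely hypothetical and are not drawn from this report's research.

```python
# Minimal sketch: comparing a fixed on-premise cost with consumption-based
# public cloud charges at different monthly demand levels.
# All figures are illustrative assumptions, not Ovum data.

ON_PREMISE_MONTHLY_COST = 12_000.0   # assumed flat cost (hardware, software, staff, facilities)
CLOUD_RATE_PER_UNIT = 0.15           # assumed charge per unit of consumption (e.g. per VM-hour)

def monthly_cloud_cost(units_consumed: float) -> float:
    """Consumption-based charging: cost scales directly with demand."""
    return units_consumed * CLOUD_RATE_PER_UNIT

def cheaper_option(units_consumed: float) -> str:
    """One input to the business case: which delivery model is cheaper at this demand level?"""
    return "public cloud" if monthly_cloud_cost(units_consumed) < ON_PREMISE_MONTHLY_COST else "on-premise"

for units in (20_000, 60_000, 100_000):
    print(f"{units} units: {cheaper_option(units)} (cloud cost = {monthly_cloud_cost(units):,.0f})")
```

The value of even a toy model like this lies in the discipline it enforces: demand management keeps consumption, and therefore cost, visible, while financial management keeps the comparison with internal provision honest.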

Service design: required at all levels


Within service design, IT service continuity and information security will be high-profile ITSM activities in light of business concerns over public-cloud-based IT service delivery. Availability and capacity management will still be relevant, with the former posing a big risk that will need to be closely managed, while the latter will hopefully be simplified through the cloud-based ability to adjust capacity as needed. Service level, service catalog, and supplier management will all be critical people activities in ensuring that public-cloud-based IT services deliver on their promises. Continual service improvement will also continue to be a formal and people-intensive mechanism to both optimize IT operations (ITO) and improve upon the quality of IT service delivery.

Service transition: change management to the fore


Within service transition, the change, release and deployment, and service asset and configuration processes will still be applicable and require people resource. The complexity and importance of change management related to public clouds should not be underestimated. Given the issues generally experienced in-house, the addition of extra variables will increase the potential for failure and may adversely affect the ability to apply quick fixes. From an asset management perspective, software license management activity in particular is required to ensure that an enterprise can meet its governance and contractual requirements.

Service operations: business as usual


Within service operations, there will be the need for a people-based service desk (even with effective self-service facilities) and incident management resource. Event management and problem management might be highly automated (if not eliminated) by the self-healing aspects of public cloud policy and orchestration engines, but both will still need some level of human resource to be applied, if only at a management level.
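To make the idea of policy-driven self-healing more concrete, the sketch below shows the kind of generic event-handling rule an orchestration engine might apply; the event types, thresholds, and actions are invented for illustration and do not refer to any specific product or API.

```python
# Minimal sketch of a policy-driven "self-healing" rule for cloud event management.
# Event names, thresholds, and remediation actions are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Event:
    service: str    # e.g. "order-processing"
    metric: str     # e.g. "instance_health" or "response_time_ms"
    value: float    # observed value reported by monitoring

def handle_event(event: Event) -> str:
    """Apply a simple remediation policy; escalate to people when automation is not enough."""
    if event.metric == "instance_health" and event.value == 0.0:
        return f"restart instance for {event.service}"           # automated self-healing
    if event.metric == "response_time_ms" and event.value > 2000:
        return f"scale out {event.service} by one instance"      # automated capacity action
    return f"raise an incident for {event.service} with the service desk"  # human follow-up

print(handle_event(Event("order-processing", "instance_health", 0.0)))
print(handle_event(Event("order-processing", "error_rate", 0.12)))
```

Even in this toy form, the final branch illustrates the point made above: automation can absorb routine events, but anything that falls outside policy still lands with people.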

Relationship management becomes critical


External relationship management needs to improve
Improved supplier management capabilities will be key, with supplier strategies, policies, evaluations, contracts, and performance monitoring all needed in a public cloud world. While these and other ITIL processes are mostly referred to here from a people-capabilities perspective, the same is also true from the point of view of designing, deploying, and consistently operating an effective process.
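The performance monitoring referred to above, together with the service credits mentioned earlier in this chapter, implies an ongoing, repeatable calculation against the contract. The sketch below shows one hypothetical way of turning a monthly availability figure into a service credit; the bands and percentages are invented for illustration and are not taken from any real contract or from this report.

```python
# Minimal sketch: supplier performance monitoring with illustrative service-credit bands.
# The SLA target, availability bands, and credit percentages are assumptions only.

MONTHLY_FEE = 10_000.0
CREDIT_BANDS = [          # (minimum availability achieved, credit as a share of the monthly fee)
    (0.9995, 0.00),       # met or beat the assumed 99.95% target: no credit due
    (0.9990, 0.05),
    (0.9950, 0.10),
    (0.0000, 0.25),       # anything worse: assumed maximum credit
]

def service_credit(measured_availability: float) -> float:
    """Return the credit owed for the month under the illustrative bands above."""
    for floor, credit_rate in CREDIT_BANDS:
        if measured_availability >= floor:
            return MONTHLY_FEE * credit_rate
    return 0.0

print(service_credit(0.9991))   # small credit (500.0 under these assumptions)
print(service_credit(0.9900))   # larger credit (2500.0 under these assumptions)
```

Supplier management in a public cloud world is largely about running this kind of measurement consistently, and about judging whether a modest credit actually compensates for the business impact of the downtime it represents.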


Internal relationship management also increases in importance


Relationship management will also be a key capability with public-cloud-delivered IT services, and not just from an ITIL v3 business relationship management perspective (a business relationship manager role was introduced in ITIL v3). ITIL v3 business relationship managers should act as liaisons between IT and the business, leveraging significant knowledge in subject matters pertaining to both. They are specifically responsible for understanding the business and its needs, assisting in the prioritization of IT-related projects, and directing IT strategy in support of business strategy. At a granular level, business relationship managers should work with all levels within the business, from day-to-day operations to strategic planning, to ensure that the right services are delivered at the right price to meet business needs. Their primary goal is to build a true partnership between IT and the business (most likely at a business-unit level), providing the business with the opportunity to help shape the IT services delivered. As IT becomes more of a conduit between external providers and internal customers and consumers, these softer skills will become increasingly important for the majority of IT staff, as they need to work more closely with the business and its component business units or functions.

Financial management puts public clouds in context


In Ovum's opinion, IT has traditionally been poor at ITFM (the function and processes responsible for managing an IT service provider's budgeting, accounting, and charging requirements), particularly the use of service costing to understand the true cost of service provision and the associated cost drivers (as per the service costing overview shown in Figure 6.3.2). Public-cloud-based service delivery will alleviate this somewhat, but in many ways it simply moves the goalposts: consumption-based charging means that understanding and managing the costs associated with IT service consumption will be a key task within the IT organization. An IT organization will need to know enough about service costs, quality, and business value to make informed decisions as to whether a public cloud is the best available delivery option.

Figure 6.3.2: Service costing overview (the figure shows cost elements such as hardware, software, staff, facilities, external services, and transfer being split into direct costs, indirect costs, and unabsorbed indirect costs, and then allocated as costs-by-service across Service A to Service D). Source: Ovum
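As a worked illustration of the allocation shown in Figure 6.3.2, the sketch below attributes direct costs straight to each service and apportions indirect costs in proportion to direct cost; the figures and the apportionment basis are illustrative assumptions rather than a prescribed costing method.

```python
# Minimal service-costing sketch in the spirit of Figure 6.3.2.
# Direct costs are attributed straight to each service; indirect (shared) costs are
# apportioned here in proportion to direct cost. All numbers are illustrative.

direct_costs = {             # assumed direct cost per service (hardware, software, staff)
    "Service A": 40_000.0,
    "Service B": 25_000.0,
    "Service C": 15_000.0,
}
indirect_costs = 30_000.0    # assumed shared costs (facilities, external services, transfer)

total_direct = sum(direct_costs.values())
costs_by_service = {
    name: direct + indirect_costs * (direct / total_direct)
    for name, direct in direct_costs.items()
}

for name, cost in costs_by_service.items():
    print(f"{name}: {cost:,.0f}")
```

Whatever apportionment basis is chosen, the output is the number that matters for cloud decisions: a per-service cost that can be compared like-for-like with a provider's consumption-based charges.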


People policies will need to reflect the risk of IT service procurement within the cloud
Public clouds make it easier for business users to slip under the IT governance radar
While public clouds are often touted as offering a wealth of benefits and opportunities to both IT and the business, they also create the potential for business people to circumvent the IT department, directly procuring public-cloud-based services to meet ever-pressing business needs for rapid technology enablement. Whether it is due to poor IT-to-business relations, a need for speed that IT is perceived to be unable to meet, the relative cost of internal IT compared with external provision, naivety, or just maverick behavior, there will undoubtedly be instances of business people considering, or even contractually engaging with, cloud-based service providers directly. While to many this may seem far-fetched, most IT service managers can tell war stories of business functions using local budgets to independently procure the design and development of a business-enabling application, with the IT function first learning of it when the business function expects it to deliver the application.

Unofficial public clouds will quickly make themselves known to IT when issues occur
Public-cloud-based service provision both simplifies and extends the severity of such a scenario. Not only can the application be designed outside the control or guidance of the IT organization, it also now has the potential to be delivered. The IT function will be blissfully unaware of the situation, and the relevant business people will probably be happy with their decision to go it alone until something goes wrong. The potential consequences are as follows:
- The procured IT service cannot be consumed behind the corporate firewall.
- The IT service doesn't deliver against the stated business requirements in terms of functionality, capacity, availability, or performance.
- Required changes to the service are not possible, financially restrictive, or both.
- The service provider goes into administration or just disappears.
- The promised levels of security are not provided or don't meet corporate IT governance requirements upon auditing.
When business people complain to the IT service desk, as the single point of contact for IT failures, and expect it to resolve the issue immediately, the IT organization may have no idea that the service even exists, let alone what it is, the business process it supports, the third-party vendor used, the SLAs involved, and the escalation procedures required to restore service as usual as soon as possible. Mayhem will follow, and no doubt the finger of blame will, at least initially, be pointed at the IT organization.

IT organizations need to get a grip on public cloud procurement


IT organizations need to act now to help prevent this scenario. The IT function needs to work with the business to ensure that people are fully aware that the only way to procure IT services is via the IT function, and of the reasons why. Beyond communicating with and educating people, however, the IT function must also ensure that it has worked with human resources to ensure that disciplinary procedures are in place so that people are both dissuaded from and penalized for procuring IT services outside of the business-agreed IT route. From a carrot perspective, a key objective is to shorten and simplify provisioning processes so that people have fewer reasons for bypassing these processes.


While this may seem draconian, an enterprise cannot afford to be placed at risk by maverick or even misguided individuals endangering business operations by independently procuring IT services without the necessary level of IT governance applied by experts within the IT function.

6.4 ITSM technology has a big role to play in managing public clouds
Managing service availability will be more complex in the cloud
With public clouds, service availability can no longer be focused on hardware
By the very nature of the public cloud and its make-up, the ability to manage service availability can no longer be focused on the particular pieces of hardware that support critical business services. This applies not only to infrastructure-as-a-service (IaaS) but also to platform-as-a-service (PaaS) and SaaS, which add their own particular twists to the situation. From a service availability perspective, public-cloud-delivered services can be considered far more complex than traditional IT infrastructure in that a workload might move across servers. With traditional on-premise delivery, by contrast, a service could be broken down into its static technology components, or configuration items, and managed from a service availability perspective from the bottom up, with greater ITSM priority or emphasis placed on the hardware that supports critical business services.

Service availability should be measured end-to-end rather than limited to the data center that provides cloud services
This complexity extends beyond the public cloud, however. As discussed earlier, IT organizations will be faced with the need to manage the availability of IT services delivered both in-house and via public clouds (or any hybrid in between). Cloud vendors will offer service-level-based information to enterprises but, especially early on in the relationship between customer and service provider, an IT organization may wish to have a greater level of control and visibility over end-to-end IT service delivery.
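To illustrate why end-to-end measurement matters, the sketch below chains together the availability of the components a user actually depends on (corporate network, internet connectivity, and the provider's service); the component figures are invented purely for illustration.

```python
# Minimal sketch: end-to-end availability of a public-cloud-delivered service.
# For components in series, user-experienced availability is the product of the
# individual availabilities. All figures below are illustrative assumptions.

components = {
    "corporate LAN and proxy": 0.9995,
    "internet/WAN connectivity": 0.9990,
    "public cloud service (provider SLA)": 0.9995,
}

end_to_end = 1.0
for availability in components.values():
    end_to_end *= availability

print(f"End-to-end availability: {end_to_end:.4%}")   # lower than any single component
```

The provider may be meeting its published SLA while the business still experiences noticeably worse availability, which is precisely why an IT organization may want its own end-to-end view rather than relying solely on the vendor's service-level reporting.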

New tools are emerging to cater for public clouds management needs
Niche cloud management vendors have already emerged with solutions to address a wide spectrum of public cloud management needs for early adopters. For enterprise IT organizations that already use traditional IT management products, however, the question is whether they will want to add more tools to what is probably already an overpopulated tool bag. Just as traditional systems management vendors have added virtualization capabilities to existing tools, so they will do the same with public-cloud-centric capabilities. However, will the cloud management vendors add traditional systems management functionality to their solutions, or are they better off remaining niche players, continuing to offer potentially better capabilities than the consolidated approach of the traditional systems management vendors? There is also a third possibility, where third-party vendors, spotting a gap in the market, create solutions that allow cloud management vendor and systems management vendor tools to operate in their non-native environments.

All three vendor types will potentially offer fit-for-purpose solutions to enterprises, but it is still too early to tell whether a particular approach will prevail over the others. No matter what the future holds for these vendors, IT organizations will need to decide upon the best approach to managing a hybrid data center environment, a decision that needs to be considered and undertaken as part of the overall planning and costing of moving IT services to public clouds. The cost of new management tools will need to be factored in when calculating the total cost of ownership of public-cloud-based IT service provision. In many ways, an IT organization's experiences of, and lessons learned from, managing within a virtualized environment will be a stepping stone to managing service availability with IaaS public clouds.


Service catalogs can be leveraged for cloud provisioning


Service catalogs can help optimize IT provisioning
Service catalogs were very much in vogue during 2009 and continue to be so, given that appropriate service catalog management activity and back-end technology can help to optimize the traditionally people-resource-intensive process of request fulfillment at both the requirements-capture and provisioning stages. Employee self-service takes the burden and cost of ordering away from the service desk (allowing staff to concentrate on greater value-add activities) and can offer significant reductions in the total cost of service request handling and provisioning (by as much as 75%) through the use of workflow and automation for authorization and service provisioning. In response to the changing IT delivery landscape, service catalog vendors have extended, or are now extending, their solution capabilities to cater for public cloud environments.

Service catalogs are key to the cloud


The service catalog management process and enabling technology should be used not only for the design and costing of public-cloud-delivered services but also for internal self-service private cloud service provisioning, supporting the cloud ethos of agility and cost-efficiency. Ovum fully expects IT organizations to address many of the ITSM challenges offered by public cloud adoption by defining their own service catalog with a menu of standard service options, policy-governed self-service, and other key service management capabilities such as pricing and usage tracking.
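As a sketch of what a menu of standard service options with policy-governed self-service, pricing, and usage tracking might look like behind the scenes, the example below models a single catalog entry; the service, price, and approval policy are hypothetical and are not based on any particular vendor's product.

```python
# Minimal sketch of a service catalog entry with policy-governed self-service,
# pricing, and usage tracking. The service, price, and approval threshold are
# illustrative assumptions, not a reference to any specific catalog product.

from dataclasses import dataclass, field

@dataclass
class CatalogItem:
    name: str
    monthly_price_per_unit: float
    requires_approval_above: int             # simple self-service policy threshold
    usage_by_business_unit: dict = field(default_factory=dict)

    def request(self, business_unit: str, units: int) -> str:
        if units > self.requires_approval_above:
            return f"{business_unit}: request for {units} units routed for approval"
        # policy allows automatic fulfilment; record usage for chargeback
        self.usage_by_business_unit[business_unit] = (
            self.usage_by_business_unit.get(business_unit, 0) + units
        )
        return f"{business_unit}: {units} units provisioned automatically"

    def monthly_charge(self, business_unit: str) -> float:
        return self.usage_by_business_unit.get(business_unit, 0) * self.monthly_price_per_unit

vm = CatalogItem("Standard virtual server", monthly_price_per_unit=55.0, requires_approval_above=10)
print(vm.request("Marketing", 4))
print(vm.request("Marketing", 25))
print(f"Marketing monthly charge: {vm.monthly_charge('Marketing'):.2f}")
```

Surfacing a running business-unit charge in this way is exactly the incentive described in the paragraphs that follow: consumers are nudged to switch services off when they are no longer needed.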

Service catalogs focus the enterprise on what it really needs


Through a service catalog front-office solution, IT organizations can better deliver against the utility-based concept of cloud-based service delivery, where IT services are turned on and off as required. In many ways this places the onus on the consumers of those services to commence and cease services as needed, with the specter of being charged (at a business-unit level) for services that are not being used looming large as an incentive.

Consider the management of the end-user interface and experience


Much is made of moving internal-customer IT services to public clouds (based on a well-considered business case). However, IT organizations should not overlook an opportunity that exists closer to home: moving the corporate ITSM tool or other key IT management tools outside of the firewall.

The market for SaaS ITSM tools is growing


The market penetration of SaaS ITSM tools is still extremely low compared with on-premise tools. It is, however, a rapidly growing market segment both in terms of customer interest and vendor entry (or reentry for traditional vendors offering a SaaS version of existing tools).

Increasing corporate familiarity with SaaS makes it easier for SaaS tools
SaaS ITSM vendors are not just riding on the back of the global recession, as the benefits are not limited to value for money. Ease of use and ease of version upgrade also count, and one should not forget the benefit of removing the operation and support of what is essentially a non-business-critical IT system from an often already overrun internal IT organization, allowing it to focus on greater value-add activities.

Companies see SaaS ITSM tools as a cheaper, low-risk option


An oft-quoted industry figure is that, on average, organizations change their ITSM tool every five years. While the effect of the economic downturn on this statistic is unknown, Ovum is seeing greater attention placed on the overall cost of providing ITSM-related services, and increased consideration before an organization will commit to the cost and inconvenience of upgrading an existing ITSM tool. Vendors are also reporting that customers are still readily switching between ITSM tools (archiving rather than porting historical transactional data across), with some clients viewing SaaS as a low-risk, potentially tactical solution in the knowledge that they can readily return to an on-premise solution (with the existing vendor or an alternative) if needed.


SaaS is a good fit for ITSM tools


Overall, SaaS delivery is a good fit for ITSM tools, especially given that, with a phased implementation, it can be a relatively low-risk way of introducing both SaaS and ITSM into an organization. Enterprises are now far more aware of the benefits and risks of SaaS and, with the continuing influx of SaaS-delivered solutions from SaaS-only vendors, traditionally on-premise ITSM vendors, and new dual-play vendors, SaaS-delivered ITSM tools will continue to erode the market dominance of their on-premise siblings. Such solutions are well worth considering in the overall mix of technologies required to help manage the cloud and cloud-delivered services.

6.5 Recommendations
Recommendations for enterprises
Enterprises need to consider a wide range of public-cloud-based issues from an ITSM perspective
Some public-cloud-based ITSM issues are obvious. An IT organization needs to have a good understanding of the IT services it provides, along with the service-delivery quality levels required, and the SLA targets agreed with the business. At minimum there will be end-user education needs to fulfill, and the potential for non-conformance needs to be managed. Business perceptions of the potential for degradation of service in terms of availability, speed of response, and security will also need to be managed. An area where many IT organizations will struggle is service costing, and an IT organization needs to ensure that oranges are compared with oranges financially when making decisions around public clouds.

Public clouds will necessitate the introduction or reintroduction of certain ITIL processes
Following service migration, there is potential for IT functions to lose both visibility and control of IT services within public clouds. ITFM and SLM will both be key ITSM capabilities, as will ramped-up supplier management activity, as IT organizations endeavor to manage a blended mix of on-premise and public-cloud-based IT service delivery. The ITIL v3 service catalog management process and enabling technology should be used not only for the design and costing of public-cloud-delivered services, but also for self-service provisioning of private cloud services, supporting the cloud ethos of agility and cost-efficiency.

IT organizations need to reassess roles and responsibilities, and peoples skills and capabilities, to reflect cloud-based changes
There is no doubt that IT organizations will need to evolve to reflect the change in focus caused by the externalization and loss of immediate management of some IT infrastructure and IT services. However, even in an IT organization that migrates all IT services to public clouds there will still be a need for resource to manage service delivery using best-practice ITSM processes, and for systems administrators and networking professionals to acquire new skills for the new complexities of IT service delivery that public clouds bring. Understanding and applying the service lifecycle is key, with IT staff viewing and managing technology provision as discrete IT services aimed at consistently meeting business needs for the technology enablement of business processes. SLM will be critical, as the ability to effectively manage service delivery from a third party will become paramount from a business-continuity perspective. As IT becomes more of a conduit between external providers and internal customers and consumers, softer skills will become increasingly important for the majority of IT staff, as they need to work more closely with the business and its component business units or functions.


People policies need to reflect the risk of non-conformant IT service procurement


While public clouds are often touted as offering a wealth of benefits and opportunities to both IT and the business, they also create the potential for business people to circumvent the IT department, directly procuring public-cloud-based services to meet ever-pressing business needs for rapid technology enablement. Not only can the application be designed outside the control or guidance of the IT organization, it can now potentially also be delivered. The IT function will be blissfully unaware of the situation and the relevant business people will probably be happy with their decision to go it alone until something goes wrong. IT organizations need to act now to help prevent this scenario occurring. The IT function needs to work with the business to ensure that people are fully aware that the only way to procure IT services is via the IT function and that they understand the reasons why. Beyond communicating with and educating people, however, the IT function must also ensure that it has worked with human resources to ensure that disciplinary procedures are in place so that people are both dissuaded from and penalized for procuring IT services outside of the business-agreed IT route. While this may seem draconian, an enterprise cannot afford to be placed at risk by maverick or even misguided individuals endangering business operations by independently procuring IT services without the necessary level of IT governance applied by experts within the IT function.

IT organizations need to decide upon the best approach to, and tools for, managing a hybrid data center environment
There are various options available to enterprises. Niche cloud management vendors have already emerged with solutions to address a wide spectrum of cloud management needs. Traditional systems management vendors that have added virtualization capabilities to existing tools will do the same with cloud-based capabilities. Third-party vendors, spotting a gap in the market, will create solutions that allow cloud management vendor tools and systems management vendor tools to operate in their non-native environments. However, no matter what the future holds for these vendors and their solutions, IT organizations will need to decide upon the best approach to managing a hybrid data center environment, a decision that needs to be considered and undertaken as part of the overall planning and costing of moving IT services to a public cloud, with the cost of new management tools factored in when calculating the total cost of ownership of public-cloud-based IT service provision.

Alternative views
ITIL v3 is not the only best-practice ITSM framework available
While ITIL v3 is consistently referred to throughout this paper, it is not the only option available to enterprises. In many ways ITIL can be considered documented common sense, and for many IT organizations the introduction of ITIL v2 in particular was a formalization of existing internal processes to improve consistency of application and the utilization of industry best practice where needed. So while ITIL has been a platform for worldwide IT service improvement, gaining strong brand awareness and associated popularity in the process, it is not the only option available to IT organizations. An organization could go it alone and use process improvement methodologies such as Six Sigma and benchmarking to drive incremental improvements of existing processes to support cloud computing. Alternatively, it could opt for the international ITSM standard ISO/IEC 20000, which represents a far greater challenge than ITIL but allows compliance with best practice to be measured. It could also opt for Control Objectives for Information and Related Technology (COBIT), the IT management framework that offers best-practice IT processes with a focus on IT governance and internal controls. So ITIL is definitely not the only option available to IT organizations, just currently the most popular.


CHAPTER 7: Glossary

Cloud computing: IT (network, hardware, software) resources available on-demand, either internally (private cloud) or from a third party (public cloud).

Infrastructure-as-a-service: Infrastructure-as-a-service (IaaS) combines computing and/or storage, as well as network resources, based on standardized hardware (servers, switches, routers) and software (hypervisor, operating system, management) components and associated services (such as DHCP, DNS, LDAP, and SSH).

Multi-tenant architecture: Multiple customers are served simultaneously from one system, rather than from individual instances set up for each customer. Data and processing are logically separated for security. Recoding existing application software to achieve this involves reworking functions such as data indexing and searching.

Platform-as-a-service: Platform-as-a-service (PaaS) adds a new layer of software services on top of those usually found in IaaS to make it easier to develop and/or run applications.

Private cloud: Private clouds are positioned as next-generation data centers. Some define them as the aim of the data center evolution journey, a long, patient maturation that starts with companies understanding what they currently have, and then shaping it slowly to achieve a fully dynamic shared infrastructure. Others emphasize the need to take shortcuts along the way by pushing parts of the data center ahead to deliver a focused return on investment. From this perspective, the private cloud is the part(s) of the data center that is ahead of the rest.

Public clouds: Public clouds are usually split into IaaS, PaaS, and software-as-a-service (SaaS) clouds.

Software-as-a-service: SaaS combines application functionality delivery via a web browser with data encryption, transmission, access, and storage services. It can be consumer-centric (such as Flickr's photo storage, management, and sharing offering), enterprise-centric (such as Salesforce.com's CRM offering), or both (such as Google's Gmail email offering).


CHAPTER 8: Appendix

Further reading
Datamonitor (2010) 2010 Trends to Watch: Cloud Computing, January 2010, BFTC2534
Ovum (2010) Cloud computing fundamentals, August 2010, OVUM052638
Ovum (2009) Cloud computing in IT services: a primer, July 2009, OVUM051158
Ovum (2009) Data security in the cloud, May 2009, OVUM050904
Ovum (2010) Identity services in the cloud, September 2010, OVUM052712
Ovum (2010) Managed hosting: more of a utility than a cloud, March 2010, OVUM051982
Ovum (2010) The cloud computing strategies of global telcos, July 2010, OVUM052546
Ovum (2010) The clouds open for enterprise storage, June 2010, OVUM052491
Ovum (2010) The role of multi-tenancy in a cloud environment, June 2010, OVUM052476
Ovum (2010) Transformation and sustainability complement the cloud in managed services, June 2010, OVUM052366
Ovum (2010) Virtual private clouds: a very public/private affair, July 2010, OVUM052572

Methodology
Primary research/vendor briefings: ongoing briefings with technology vendors serving the government sector.
Secondary research: industry publications, companies' annual reports and press releases, and data from public databases.

Author(s)
Laurent Lachal, senior analyst, Ovum software group, laurent.lachal@ovum.com
Stephen Mann, senior analyst, Ovum software group, stephen.mann@ovum.com

Ovum consulting
We hope that the analysis in this brief will help you make informed and imaginative business decisions. If you have further requirements, Ovum's consulting team may be able to help you. For more information about Ovum's consulting capabilities, please contact us directly at consulting@ovum.com.

Disclaimer
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher, Ovum (a subsidiary company of Datamonitor).



This Report reveals:
- Why the public cloud market is more complex than expected.
- How private clouds are catching up with public cloud capabilities.
- Why hybrid clouds are the next frontier for the enterprise.
- That public cloud pricing structures are evolving, but not always for the better.
- Why service-level agreements (SLAs) are key to cloud adoption.
- Where public clouds are changing the IT function.
- Why security is the number-one cloud quality of service (QoS) concern.
- Why reliability and availability are under increasing scrutiny.
- That cloud governance, like IT governance, is a work in progress.
- How public clouds are changing the IT service management (ITSM) landscape.

Ovum Europe
119 Farringdon Road, London, EC1R 3DA, United Kingdom t: +44 (0)20 7551 9850 e: info@ovum.com

Ovum Australia
Level 5, 459 Little Collins Street, Melbourne 3000, Australia t: +61 (0)3 9601 6700 f: +61 (0)3 9670 8300 e: info@ovum.com

Ovum New York


245 Fifth Avenue, 4th Floor, New York, NY 10016, United States t: +1 212 652 5302 f: +1 212 202 4684 e: info@ovum.com

Driving business value through collaborative intelligence

OI00005-006
