Enrique Castro-Leon

Biography

Enrique Castro-Leon is an enterprise architect and technology strategist with Intel Corporation, working on technology integration ranging from highly efficient virtualized cloud data centers to emerging usage models for cloud computing.

He is the lead author of two books, The Business Value of Virtual Service Grids: Strategic Insights for Enterprise Decision Makers and Creating the Infrastructure for Cloud Computing: An Essential Handbook for IT Professionals.

He holds a BSEE degree from the University of Costa Rica, and M.S. degrees in Electrical Engineering and Computer Science, and a Ph.D. in Electrical Engineering from Purdue University.


Bernard Golden

Biography

Bernard Golden has been called "a renowned open source expert" (IT Business Edge) and "an open source guru" (SearchCRM.com) and is regularly featured in magazines like Computerworld, InformationWeek, and Inc. His blog "The Open Source" is one of the most popular features of CIO Magazine's Web site. Bernard is a frequent speaker at industry conferences like LinuxWorld, the Open Source Business Conference, and the Red Hat Summit. He is the author of Succeeding with Open Source (Addison-Wesley, 2005; published in four languages), which is used in over a dozen university open source programs throughout the world. Bernard is the CEO of Navica, a Silicon Valley IT management consulting firm.


Miguel Gomez

Biography

Miguel Gomez is a Technology Specialist for the Networks and Service Platforms Unit of Telefónica Investigación y Desarrollo, working on innovation and technological consultancy projects related to the transformation and evolution of telecommunications services infrastructure. He has published over 20 articles and conference papers on next-generation service provisioning infrastructures and service infrastructure management. He holds PhD and MS degrees in Telecommunications Engineering from Universidad Politécnica de Madrid.



Cloud Computing Basics

Published: December 07, 2010 • SOA Magazine Issue XLV
 

Abstract: Many users of computer technology–and for that matter, many technology creators and administrators–complain about the rapid pace of change in information technology. The most recent example of a new technology trend bursting upon the scene is cloud computing. Setting a record for going from "what is it?" to "I've got to have it," cloud computing seems to many people to represent a revolution in how computing will be done in the future. It is important, however, to understand that despite its sudden arrival, cloud computing is actually the latest manifestation of well-established trends, each of which has brought new benefits and new challenges to those working in IT. It is crucial to understand that cloud computing signifies a movement away from an IT-centric product focus and signals a re-engagement with computing users, made possible by those long-established trends.


Introduction

Cloud computing is the latest–and hottest–technology trend going. Many people see it as crucial to the next step in enterprise computing, and an inevitable development in internal data centers. But what is it?

It doesn't take long while examining the buzz around cloud computing to realize the definition of cloud computing is, well, cloudy. There are tons of definitions around, and each day brings someone else's definition to the mix.

At HyperStratus, instead of adding to the cacophony of definitions, we refer to one promulgated in the February 2009 report by the UC Berkeley Reliable Adaptive Distributed Systems Laboratory. This organization, also known as the RAD Lab, identified three characteristics of cloud computing:

  • The illusion of infinite computing resources available on demand, thereby eliminating the need for cloud computing users to plan far ahead for provisioning;
  • The elimination of an upfront commitment by cloud users, thereby allowing companies to start small and increase hardware resources only when there is an increase in their needs;
  • The ability to pay for use of computing resources on a short–term basis as needed (for example, processors by the hour and storage by the day) and release them as needed, thereby rewarding conservation by letting machines and storage go when they are no longer useful.

What do these three characteristics mean in real–world environments?


The Illusion of Infinite Computing Resources

In typical data centers, an application's resources are relatively fixed. There is little opportunity to adjust the amount of compute resources devoted to an application in a short timeframe. Consequently, calculating the amount of resources to devote to the application is performed in a process known as "capacity planning." Since it is difficult to truly forecast what usage patterns will be during an application's lifetime, it is to be expected that the capacity planning exercise will be a rough guess. Consequently, application planners have two options available:

Overprovision: This option requires purchasing more equipment than the forecast calls for. Doing this means the application will always have adequate resources available, but also means that for some or all of its life extra capital will be tied up in unneeded equipment.

Underprovision: This option purchases just the amount of equipment called for in the capacity planning exercise. However, it runs the risk that too little equipment will be available if demand grows significantly. Because the lead time on obtaining new hardware resources is typically lengthy, this option often leads to applications running slowly or being unavailable due to overload.

One might summarize these two options thus: overprovision, and throw away money; or underprovision, and throw away users.
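The tradeoff above can be sketched numerically. The helper below is purely illustrative, with invented demand figures: it compares a fixed capacity against actual demand per period, counting idle (wasted) capacity and unmet (lost) demand.

```python
# Hypothetical illustration of the capacity-planning tradeoff: fixed
# provisioning is compared against actual demand, counting idle (wasted)
# capacity and unmet (lost) demand. All numbers are invented for the sketch.

def provisioning_outcome(capacity, demand_by_period):
    """Return (wasted_capacity, unmet_demand) summed over all periods."""
    wasted = sum(max(capacity - d, 0) for d in demand_by_period)
    unmet = sum(max(d - capacity, 0) for d in demand_by_period)
    return wasted, unmet

# Forecast said 100 units; actual demand varies between 40 and 160.
demand = [40, 70, 100, 160, 120, 60]

over_wasted, over_unmet = provisioning_outcome(160, demand)    # overprovision
under_wasted, under_unmet = provisioning_outcome(100, demand)  # underprovision

print(over_wasted, over_unmet)    # → 410 0: capital tied up, no user turned away
print(under_wasted, under_unmet)  # → 130 80: less idle capacity, but demand goes unmet
```

Neither outcome is free: one wastes capital, the other wastes users, which is exactly the bind cloud provisioning is meant to escape.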

Cloud computing addresses this issue by offering a very large pool of compute resources that can be assigned on a dynamic basis to applications as needed.

This "infinite scalability" obviates the challenge of capacity planning. It's not necessary to forecast future system load; if system load increases, more resources are requested and added to the application's resource pool.

Cloud computing moves the capacity planning role and responsibility from the application to the cloud provider. This enables capacity planning to be performed over a larger overall pool of application load, enabling demand smoothing over a larger number of total compute resources.
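The demand-smoothing effect can be shown with a small sketch, assuming invented load profiles: because different applications peak at different times, the peak of the pooled load is usually lower than the sum of individual peaks, so the provider needs less total capacity than the applications would need separately.

```python
# Hypothetical sketch of why pooling helps the provider: the peak of the
# summed load across applications is usually lower than the sum of each
# application's individual peak, so less total capacity must be held.

def sum_of_individual_peaks(loads):
    return sum(max(series) for series in loads)

def peak_of_pooled_load(loads):
    return max(sum(period) for period in zip(*loads))

# Three applications whose peaks fall in different periods (invented data).
loads = [
    [90, 20, 10, 30],   # app A peaks early
    [10, 80, 20, 30],   # app B peaks mid-day
    [20, 10, 85, 30],   # app C peaks late
]

print(sum_of_individual_peaks(loads))  # → 255: capacity needed without pooling
print(peak_of_pooled_load(loads))      # → 120: capacity needed with pooling
```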


No Upfront Commitment

Because the amount of compute resources that will be required by an application is unforeseeable, it's important that no commitment to consume a particular level of resources be required. Otherwise, cloud computing would be no different from the old days of capacity planning, with the same choices: overprovision and throw away capital, or underprovision and throw away users. Consequently, for cloud computing to fulfill its vision, long–term commitments to specific resource use levels must not be required.

In a world of no long–term resource commitments by applications, the cloud provider takes on the responsibility for delivering sufficient resources. The user, being able to leverage the "infinite scalability," is freed from long–term decisions. One might expect that other responsibilities will fall on the user–perhaps a higher cost for shorter–term use, an upfront payment for access to the cloud system, and so on.

One important point about the lack of commitment is that users can obtain resources for as long–or as short–a period as necessary. The scalability discussed in the previous section is therefore bidirectional: as demand falls, an application can be scaled down in the overall amount of compute resource devoted to it. Without this freedom from commitment, organizations might be reluctant to consume additional resources for fear of taking on a lengthy financial burden for resources needed only on a transitory basis.


Pay–as–You–Go Resource Use

Since no commitment beyond actual need is required in cloud computing environments, payment must be tied to something other than ownership or long–term contractual commitment.

Within cloud computing environments, this implies payment tied to direct resource use, known as pay–as–you–go. Typical payment schemes charge on the basis of processor–hours, monthly storage per gigabyte, and so on.
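A pay–as–you–go charge of this kind is simple to compute. The sketch below uses invented unit rates in the style just described (processor–hours plus storage per gigabyte–month); real providers publish their own rate cards.

```python
# A minimal sketch of a pay-as-you-go charge, using invented unit rates
# (processor-hours plus storage per GB-month).

RATE_PER_CPU_HOUR = 0.10      # hypothetical rate, dollars
RATE_PER_GB_MONTH = 0.15      # hypothetical rate, dollars

def monthly_charge(cpu_hours, storage_gb_months):
    return round(cpu_hours * RATE_PER_CPU_HOUR
                 + storage_gb_months * RATE_PER_GB_MONTH, 2)

# Two processors for 200 hours each, plus 50 GB stored for the month.
print(monthly_charge(cpu_hours=400, storage_gb_months=50))  # → 47.5
```

The point is that the bill is a direct, transparent function of metered consumption, with no ownership or long–term contract anywhere in the formula.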

Tying costs to resource use offers a number of benefits:

  • It ties costs to consumption. Many IT organizations assess costs in an opaque fashion, making it difficult for user organizations to understand the basis for the assessed costs. Tying payment to transparent measures of resource consumption makes it much easier to understand the cost rationale for IT services.
  • It ties costs to value. Many user organizations feel that the costs of their IT resources are disproportional to the value they receive from their applications. When costs are directly tied to resource consumption, the connection between application cost and the value received by running the application is much clearer.
  • It leads to better use of computational resources. Since the value associated with individual compute resources can be transparently examined, it is much easier to determine whether applications are cost-justified. Should an application consume more resources than the value generated by application operation, it is easy to shut down the application and stop wasting money on low–value compute activities. Of course, the lack of long–term commitment discussed in the previous section makes this application culling possible.
  • It improves the efficiency of the economic ecosystem overall. Service providers essentially pool services across their customer base, reducing stranded capacity. Also, successful service providers become experts at what they do and hence can deliver a service more efficiently, and at lower cost, than the same service delivered in house. It's a win-win situation: service providers make a profit while customers pay less than it would cost them to deliver the service themselves.

The Benefits of Cloud Computing

The RAD Lab's cloud computing characteristics imply several benefits for application developers and users:

  • The tradeoff between provisioning and budget discipline is no longer necessary. Because cloud environments offer, effectively, infinite resources on demand, application creators need no longer attempt to forecast future application load; instead, they can deploy at a resource level appropriate to initial demand, secure in the knowledge that increased application load can be easily met through additional resource allocation.
  • Cloud scalability fosters a new breed of applications. Applications that require massive amounts of compute resources are one type of system that space–constrained internal data centers struggle to support. Another type of application poorly suited to traditional data center environments are those that have very large variability in load; because of the difficulty of adding and subtracting compute resources in traditional environments, application creators typically had no opportunity to deploy high load-variability applications. In cloud environments, these kinds of applications–very large analytical or data mining systems are a common example–can easily be accommodated.
  • Cloud computing will foster new business models. That book has not yet been written. We are in the middle of a significant transformation in the industry, and we won't know what works and what does not until the dust settles. This situation should not preclude experimentation. Visionary organizations can make educated guesses, and due to the pay–as–you–go nature of the cloud, experimentation does not require large capital outlays. This approach optimizes the upside while minimizing the downside. Visionary organizations will likely realize a first-mover advantage, leaving laggards to catch up.
  • Both users and IT organizations benefit from cloud computing. IT organizations can ensure that resources are consumed only by organizations willing to directly pay for their use; this provides an efficient rationing mechanism and obviates the need for IT to adjudicate among competing demands for IT resources. From the perspective of IT users, cloud computing provides transparency in IT costs; more importantly, it allows IT users to control the decision about resource consumption. Assuming the user organization is willing to pay, resources are available. This offers user organizations the opportunity to map their use of compute resources to their business needs in a fine–grained fashion–certainly the ideal of cost/benefit balance.

Implementation Challenges of Cloud Computing

The vision of cloud computing is clear: control of resource provisioning is shifted to the application group–the IT organization responsible for the application–away from the operations group–the IT organization responsible for overall resource provisioning. A different way of saying this is that the operations group retains responsibility for ensuring overall capacity is sufficient for the total demand load of all applications, but individual application groups now have the responsibility (and right) to make individual provisioning decisions as they see fit.

Obviously, this is a vastly different approach to managing infrastructure–one that is revolutionary, carrying enormous benefits, but one that disrupts the traditional organization and processes of IT organizations. One very exciting aspect of cloud computing is that it focuses IT more on responding to application–also known as end user–needs, rather than internal IT needs. The phases of virtualization described as server consolidation and infrastructure automation focus primarily on improving IT operations efficiency–admirable, no doubt, but with little effect on application availability and efficiency. Remember, applications are the real point of IT: delivering functionality that improves business operations, reduces costs, and increases opportunities. An IT organization that increases its efficiency but fails to increase the usefulness of applications is failing at its raison d'être.


Real–time Capacity Planning

Unlike older systems, in which total physical demand provisioning was a relatively relaxed effort with extended timeframes made necessary by the lengthy delivery cycles of vendors, cloud computing requires real–time capacity planning. Because application groups can "click a mouse button" to access more resources, it's crucial that IT operations ensure sufficient physical resources are available in a real–time fashion.

This need for real–time capacity planning will stress IT operations as it's never been stressed before. IT operations will have little insight into total system demand, since demand can vary according to individual application groups' decisions, yet it must ensure that application groups' requests for additional resources never come up "empty."

A good metaphor for what IT operations must put into place is the "just–in–time" inventory systems common to automobile manufacturing today. Previous generations of auto manufacture relied on massive local inventories of parts to ensure production was never interrupted. The just–in–time revolution substituted accurate forecasting and advanced logistics to pare physical inventories and reduce costs. Real–time capacity planning for IT operations will impose the same conditions.
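The just–in–time metaphor can be sketched as a simple reorder-point check. Everything here is an assumption for illustration–the thresholds, lead times, and consumption rates are invented–but it shows the shape of the decision IT operations must automate: order new hardware when the free pool can no longer cover expected demand over the vendor lead time.

```python
# Hedged sketch of a real-time capacity check in the just-in-time spirit:
# when the free pool falls below a reorder threshold that covers expected
# demand over the hardware lead time, flag a procurement. Thresholds and
# demand figures are assumptions for illustration.

def needs_procurement(free_units, demand_rate_per_week, lead_time_weeks,
                      safety_units):
    """True if the free pool cannot cover demand until new hardware arrives."""
    reorder_point = demand_rate_per_week * lead_time_weeks + safety_units
    return free_units < reorder_point

# 40 free servers, consuming 10/week, 6-week vendor lead time, 5 spare.
print(needs_procurement(40, 10, 6, 5))   # → True: order now or run dry
print(needs_procurement(80, 10, 6, 5))   # → False: pool still sufficient
```

The hard part in practice is not this arithmetic but the forecasting behind `demand_rate_per_week`, which is exactly where the just–in–time analogy's "accurate forecasting and advanced logistics" come in.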


Orchestration

Orchestration is the term applied to a coordinated set of actions needed to provision a cloud–based application infrastructure. In the past, creating an application infrastructure required the involvement of several different groups: servers, network, storage, security, and perhaps even others. This imposed time delays and coordination challenges.

Orchestration moves all of the different manual processes into a single automated process that coordinates all individual resource provisioning. It is what brings to life the "click a mouse button and provision a system" vision of cloud computing. Simple. Fast. Easy.

As the saying goes, making it look easy is hard work. Orchestration is no different.

At the lowest level, orchestration depends upon a completely automated infrastructure that may be provisioned without needing any manual effort. That automated infrastructure must be implemented throughout the entire portion of the data center in which cloud computing is to exist.

At the next level up, orchestration requires a client application through which users interact with the automated infrastructure and define the resources to be provisioned. This is typically delivered as a Web page with a portal infrastructure behind it, interacting with whatever management software drives the automated infrastructure. An approved user defines the desired system resources by filling out the Web page, selecting the configuration for the virtual machine needed by the application. The configuration includes the number of processors, the amount of storage, any specific network requirements (for example, a particular VLAN the application needs to reside within), and the amount of memory to be available to the virtual machine.

Particularly sophisticated orchestration systems can define and create multiple virtual machines as part of an application topology. The virtual machines would typically each be assigned a role: one might act as a database server, several might act as web servers, and one or more might act as middleware machines in which application logic would execute. In essence, the orchestration system embodies the knowledge that system administrators would apply in creating application resources. As might be imagined, the effort necessary to capture that knowledge and implement it as a set of automation rules is not trivial.
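The flow just described can be sketched as a single automated sequence driven by one request specification. The field names, roles, and actions below are invented for illustration; a real orchestration system would call out to actual server, storage, and network management APIs at each step rather than returning strings.

```python
# Hypothetical orchestration sketch: one request spec drives the previously
# manual server/network/storage steps in a single automated sequence.
# Resource names and fields are invented for this illustration.

def provision_application(spec):
    """Run every provisioning step for one application topology."""
    actions = []
    for role, vm in spec["vms"].items():
        actions.append(f"create vm {role}: {vm['cpus']} cpus, {vm['memory_gb']} GB")
        actions.append(f"attach {vm['storage_gb']} GB storage to {role}")
        actions.append(f"place {role} on vlan {spec['vlan']}")
    return actions

# A two-role topology: a database server and a web server on one VLAN.
spec = {
    "vlan": 42,
    "vms": {
        "database": {"cpus": 4, "memory_gb": 16, "storage_gb": 500},
        "web": {"cpus": 2, "memory_gb": 4, "storage_gb": 20},
    },
}

for step in provision_application(spec):
    print(step)
```

The value is that the coordination knowledge–which steps, in which order, for which roles–lives in the automation rather than in several separate teams' queues.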


Chargeback

Chargeback accomplishes the task of assigning infrastructure costs to the application group that uses that infrastructure. Chargeback has a long and controversial history within IT. In mainframe settings, chargeback was common and direct. With the rise of distributed computing, it is much harder to assess and assign costs to any particular application, since multiple applications may share a machine. Moreover, it's difficult to assign more general costs like floor space and power, especially since those costs are often paid by a non–IT group. Virtualization has made chargeback somewhat easier, since in a virtualized world, applications tend to reside in a single virtual machine, which can simplify cost assignment.

Chargeback takes on increased urgency in a cloud computing environment. This is because the ease of requesting resources via the orchestration system is likely to drive up the overall demand for resources. In a financial climate in which IT operations groups are unlikely to receive increased budgets, increased demand will cause contention for resources, making it vital to implement a rationing mechanism. Chargeback provides that mechanism, since application groups can identify exactly how much the compute resources they've requested cost when running in a production mode. The importance of chargeback is highlighted by the fact that the UC Berkeley RAD Lab Report identifies "pay–as–you–go" as one of three key characteristics of cloud computing.

Like orchestration, chargeback is not easy to implement. Tracking down all of the costs to be included in chargeback calculations is difficult, particularly given the fact that many of them fall into different budget areas. To mitigate this problem, a limited form of chargeback that merely assigns direct costs for an application's virtual machines, network, and storage may be used as a substitute for a complete chargeback system.
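The limited form of chargeback described above can be sketched briefly. The usage records, rate names, and rates below are invented for illustration: only direct VM, network, and storage costs are summed per owning application group, with no attempt to allocate floor space, power, or other indirect costs.

```python
# Sketch of limited chargeback: only direct VM, network, and storage costs
# are summed per application group. Usage records and unit rates are
# invented for illustration.

RATES = {"vm_hours": 0.08, "gb_transferred": 0.02, "gb_stored": 0.10}

def chargeback(usage_records):
    """Sum direct costs per owning application group."""
    totals = {}
    for record in usage_records:
        cost = sum(RATES[kind] * amount
                   for kind, amount in record["usage"].items())
        totals[record["group"]] = round(totals.get(record["group"], 0) + cost, 2)
    return totals

records = [
    {"group": "billing", "usage": {"vm_hours": 100, "gb_stored": 50}},
    {"group": "billing", "usage": {"gb_transferred": 200}},
    {"group": "analytics", "usage": {"vm_hours": 500, "gb_stored": 1000}},
]

print(chargeback(records))  # per-group direct costs
```

Even this limited version supplies the rationing signal the section describes: each group can see exactly what its requested resources cost in production.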

Overall, the challenges posed by implementing a cloud computing environment should not be underestimated. Both infrastructure and process change are needed, requiring financial investment as well as organizational restructuring. An IT organization's resources may dictate whether it takes on the effort and investment in order to implement an internal cloud; if it is reluctant or unable to make the necessary resources available, it is more likely to turn to an external provider for its cloud needs. If an external cloud provider is used, that does not mean that the items identified just above are no longer needed; it just means that another organization takes on the responsibility for implementing them.


Cloud Computing: The Future

Much of the attention paid to cloud computing has focused on its agile provisioning–the way it can speed up initial system provisioning from weeks (or months) to mere minutes. There is no doubt that it delivers on quick provisioning, thereby addressing a traditional pain point for application groups. This quick provisioning enables them to commence work on new applications rapidly, or to respond to changed business conditions nimbly–certainly a huge advantage in today's tumultuous business climate, and definitely an improvement upon typical provisioning conditions.

However, less attention has been paid to the innovation opportunities available through the "infinite scalability" cloud computing provides. Applications that heretofore would never have been considered due to budgetary restrictions, data center space restrictions, or inability to manage highly variable loads can now be accommodated in the "infinitely scalable" cloud. Examples of these kinds of "infinitely scalable" enabled applications include massive business intelligence analysis, intermittent application loads, and highly seasonal processing requirements. The current data deluge being experienced by most businesses, along with the growing trend of mobile and location-aware applications, means that more of these types of applications will be needed by most mainstream businesses. Cloud computing can help.

For more information about cloud computing, please refer to the book Creating the Infrastructure for Cloud Computing: An Essential Handbook for IT Professionals by Enrique Castro–Leon, Bernard Golden, and Miguel Gomez.


Copyright © 2010 Intel Corporation. All rights reserved.

This article is based on material found in the book Creating the Infrastructure for Cloud Computing: An Essential Handbook for IT Professionals by Enrique Castro-Leon, Bernard Golden, and Miguel Gomez.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744. Requests to the Publisher for permission should be addressed to the Publisher, Intel Press, Intel Corporation, 2111 NE 25 Avenue, JF3–330, Hillsboro, OR 97124–5961. E-mail: intelpress@intel.com.