Enrique Castro-Leon


Enrique Castro-Leon is an enterprise architect and technology strategist with Intel Corporation working on technology integration for highly efficient virtualized cloud data centers to emerging usage models for cloud computing.

He is the lead author of two books, The Business Value of Virtual Service Grids: Strategic Insights for Enterprise Decision Makers and Creating the Infrastructure for Cloud Computing: An Essential Handbook for IT Professionals.

He holds a BSEE degree from the University of Costa Rica, and M.S. degrees in Electrical Engineering and Computer Science, and a Ph.D. in Electrical Engineering from Purdue University.



Bernard Golden


Bernard Golden has been called "a renowned open source expert" (IT Business Edge) and "an open source guru," and is regularly featured in magazines such as Computerworld, InformationWeek, and Inc. His blog "The Open Source" is one of the most popular features of CIO Magazine's Web site. Bernard is a frequent speaker at industry conferences such as LinuxWorld, the Open Source Business Conference, and the Red Hat Summit. He is the author of Succeeding with Open Source (Addison-Wesley, 2005, published in four languages), which is used in over a dozen university open source programs throughout the world. Bernard is the CEO of Navica, a Silicon Valley IT management consulting firm.



Miguel Gomez


Miguel Gomez is a Technology Specialist for the Networks and Service Platforms Unit of Telefónica Investigación y Desarrollo, working on innovation and technological consultancy projects related to the transformation and evolution of telecommunications services infrastructure. He has published over 20 articles and conference papers on next-generation service provisioning infrastructures and service infrastructure management. He holds PhD and MS degrees in Telecommunications Engineering from Universidad Politécnica de Madrid.




Client Virtualization in a Cloud Environment

Published: January 14, 2011 • SOA Magazine Issue XLVI

Abstract: Arguably, the computation models seen in the client space are much more diverse than those in the server space proper. For servers there are essentially two: the earlier model of static consolidation, and the more recent dynamic model in which virtual machines are lightly bound to their physical hosts and can be moved around with relative ease. With virtualized clients there are also two main models, depending on whether application execution takes place on servers in a data center or on the physical client. Beyond that we have identified at least seven distinct variants, each architected to address specific management, security, and TCO needs, and each with usage models targeting specific business scenarios. At least for server-based clients, their presence may be an indication of technology convergence between client and server products in the cloud space, a continuation of the trend that started when clients were used as presentation devices for traditional three-tier applications. This article examines some of the general issues and concerns regarding client virtualization.

Server-based computation models comprise session virtualization, also known as terminal services, and are captured in Table 1.

Client-based models comprise OS streaming, remote OS boot, application streaming, virtual containers, and rich distributed computing, summarized in Table 2.

Classifying blade PCs, such as those provided by Hewlett Packard or ClearCube, depends on whether users are assigned to blades on a one-to-one or one-to-many basis. If each user is assigned a single PC blade, the model most closely resembles a rich client, except that it can only be used from a fixed location and is constantly connected to the network. If an individual PC blade services multiple users simultaneously, the model more closely resembles a virtual hosted desktop.

Each of these computation models has appropriate uses based on the business scenario, user needs, and infrastructure requirements. Intel's position is that the client-side execution models provide the best user experience and can be deployed to meet IT's requirements for security and manageability.

Infrastructure Requirements

Each compute model places unique demands on the enterprise infrastructure. Moving large amounts of client computation, graphics, memory, and storage into a datacenter will likely require additional infrastructure build-out, unless the current equipment is grossly underutilized. Infrastructure issues to be considered include:

  1. Server computation capacity
  2. Network bandwidth (wired and wireless)
  3. Storage of user OSs, applications, data and customization profiles
  4. New connection brokers or remote access gateways
  5. New management tools
  6. Power delivery for additional computation, graphics, memory and storage now in the datacenter
  7. Cooling capacity of the datacenter
  8. Physical space within the datacenter
  9. Physical distance between the datacenter and associated clients
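To see how the first few items interact, a back-of-the-envelope sizing calculation can be sketched for a virtual hosted desktop deployment. All the per-user figures below (sessions per core, bandwidth per session, image size, cores per server) are illustrative assumptions, not vendor benchmarks or measured values.

```python
# Rough sizing sketch for a virtual hosted desktop rollout.
# Every per-user figure here is an illustrative assumption.

def size_infrastructure(users, sessions_per_core=4, mbps_per_session=0.3,
                        private_image_gb=20, cores_per_server=16):
    """Estimate servers, network, and storage for `users` concurrent desktops."""
    cores_needed = -(-users // sessions_per_core)   # ceiling division
    servers = -(-cores_needed // cores_per_server)
    bandwidth_mbps = users * mbps_per_session       # display-protocol traffic
    storage_gb = users * private_image_gb           # one private image per user
    return {"servers": servers,
            "bandwidth_mbps": bandwidth_mbps,
            "storage_gb": storage_gb}

print(size_infrastructure(500))
# → {'servers': 8, 'bandwidth_mbps': 150.0, 'storage_gb': 10000}
```

Even a crude model like this makes the last three list items concrete: those eight servers and ten terabytes of storage consume power, cooling, and floor space that previously lived at the desktop.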

Client Devices and Compute Models

Conversations around compute models often get intertwined with the device on which they will be deployed. The analysis becomes easier if devices and models are treated separately. For example, the business scenario may dictate server-based computing for a certain application, such as a patient information database. However, this "thin client" model need not be deployed on a thin terminal. A desktop or laptop PC may actually be a more appropriate device, depending on a user's total application and mobility needs.

Mixed Compute Models

In most cases, IT will deploy a mix of computation models depending on needs for data security, performance, and mobility. Individual users may have a hybrid of models. For example, a construction estimator in the field may use a cellular modem to access the centralized job scheduling tool via a terminal server session, but also have Microsoft* Office locally installed for word processing and spreadsheet work. The complete application and business needs of the user should be carefully parsed to understand which applications and data make sense to centralize versus install locally. Only in certain cases does a 100% server-side model make business sense.
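The per-application "centralize versus install locally" analysis can be sketched as a simple placement rule. The attributes and the decision logic below are hypothetical illustrations of the trade-off described above (sensitive data favors server-side execution; offline or graphics-heavy use favors the client), not Intel's or any vendor's methodology.

```python
# Hypothetical per-application placement rule. The attribute names and
# the rule ordering are illustrative assumptions, not a published method.

def place_application(app):
    """Suggest a compute model for one application, given its needs."""
    if app.get("sensitive_data") and not app.get("offline_required"):
        # Keep sensitive data in the datacenter when connectivity allows.
        return "server-side (terminal session or virtual hosted desktop)"
    if app.get("offline_required") or app.get("graphics_intensive"):
        # Offline or graphics-heavy work argues for local execution.
        return "client-side (local install or streamed application)"
    return "either (decide on cost and manageability)"

# The construction-estimator example from the text:
scheduler = {"sensitive_data": True, "offline_required": False}
office = {"sensitive_data": False, "offline_required": True}
print(place_application(scheduler))  # server-side ...
print(place_application(office))     # client-side ...
```

Run against a full application portfolio, a rule like this typically yields a mix of placements, which is exactly the hybrid outcome the text describes.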

Security Considerations

There is no such thing as perfect security; protection is always a matter of degree. For simplicity, let's constrain security considerations to software-based attacks (viruses, worms, software vulnerability exploits) and remote hacking, assuming that the user is not a malicious attacker. Let's exclude hardware-based attacks, such as videotaping screen images or attaching purpose-built attack hardware: no compute model is inherently immune to that class of attacks.

Benchmarking Applications

There are no industry-standard benchmarks for alternative compute models, and under the current state of the art it is not meaningful to carry out performance comparisons across computation models. Performance comparisons can be attempted between models if a common workload is measured, but even then, factors such as network loading, number of simultaneous users, server and network speed, and workload content can make simulation results differ markedly from real-world deployments. IT managers should evaluate performance claims carefully to understand their applicability to their own situations.
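A minimal like-for-like comparison, under the common-workload caveat above, would script the same task under each deployment and report a distribution rather than a single number, since network load and user count dominate the variance. The harness below is a generic sketch; the workload callable is a placeholder, not a standardized benchmark.

```python
# Minimal timing harness for comparing one scripted workload across
# compute models. The workload itself is a placeholder; in practice it
# would script document opens, searches, and saves in each environment.
import statistics
import time

def time_workload(run_once, repetitions=10):
    """Run a workload callable repeatedly; summarize wall-clock times."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run_once()
        samples.append(time.perf_counter() - start)
    # Median resists outliers; stdev exposes run-to-run variance,
    # which is where network and contention effects show up.
    return {"median_s": statistics.median(samples),
            "stdev_s": statistics.stdev(samples)}

result = time_workload(lambda: sum(range(100_000)))
print(result)
```

Even with such a harness, the caution stands: results measured in a lab at one user count say little about a loaded production network.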

Streaming and Application Virtualization

Streaming and application virtualization are not synonyms, even though the terms are often used interchangeably. Streaming refers to the delivery method of sending software over the network for execution on the client. Streamed software can be installed locally in the client operating system or, in most cases, it can be virtualized. With application virtualization, streamed software runs on an abstraction layer and does not install into the OS registry or system files. When shut down, a virtualized application may be removed from the client, or stored in a special local cache for faster launches or off-network use. The abstraction layer may limit how the virtualized application can interact with other applications. Application virtualization can also limit the continuous accumulation of randomness in the OS registry and system folders that leads to system instability over time.
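The abstraction layer's behavior can be illustrated with a toy overlay: reads fall through to the real system state, writes land in a per-application scratch layer, and shutdown either discards that layer or keeps it as a local cache. This is a conceptual sketch of the idea, not how any particular product implements it.

```python
# Conceptual sketch of an application-virtualization layer: the real OS
# registry is never modified; writes go to a per-application overlay.

class VirtualRegistry:
    def __init__(self, system_registry):
        self._system = system_registry   # real OS state, read-only here
        self._overlay = {}               # per-application scratch layer

    def read(self, key):
        # Reads see the overlay first, then fall through to the system.
        return self._overlay.get(key, self._system.get(key))

    def write(self, key, value):
        self._overlay[key] = value       # isolated from the real registry

    def shutdown(self, keep_cache=False):
        # Optionally keep the overlay as a local cache for faster relaunch
        # or off-network use; the system registry is untouched either way.
        cache = dict(self._overlay) if keep_cache else None
        self._overlay.clear()
        return cache

system = {"HKLM/Version": "10.0"}        # hypothetical key names
vreg = VirtualRegistry(system)
vreg.write("HKCU/AppSetting", "enabled")
print(vreg.read("HKLM/Version"))         # falls through → 10.0
vreg.shutdown()
print(system)                            # unchanged: {'HKLM/Version': '10.0'}
```

Because every write stays in the overlay, the "accumulation of randomness" in the registry and system folders simply never reaches the real OS.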

Application versus Image Delivery

A helpful way to think of the models and how they fit with customer requirements is whether the problem needs to be solved at the application level or image level. In this case, an image is the complete package of the operating system and required applications. Some computation models solve application problems, some solve image problems. It is important to understand the customer's need in this area.

Public versus Private Images

When centrally distributing a complete desktop image with either virtual hosted desktop or OS streaming, it is important to comprehend the difference between a common public image and a customized private image.

Public images are standardized OS and application stacks managed, patched and updated from a single location and distributed to all authorized users. Files and data created by the applications are stored separately. Customization of the image is minimal, but since all users access a single copy of the OS and application, storage requirements are relatively small.

Private images are OS and application stacks personalized to each user. Although users enjoy a great deal of customization, each private image must be stored and managed individually, much like managing rich, distributed clients. Current products do not allow private images to be patched or updated in their stored locations, but rather require them to be actively loaded and managed in-band, either on the server or on the client. The storage requirement of private images is much higher, since each user's copy of the OS and applications must be stored.
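The storage gap between the two approaches is simple arithmetic: one shared copy versus one full copy per user. The 20 GB image size below is an illustrative figure, not a product specification.

```python
# Storage comparison for public vs. private desktop images.
# The 20 GB image size is an illustrative assumption.

def image_storage_gb(users, image_gb=20, model="private"):
    """Total image storage: one shared stack, or one full stack per user."""
    if model == "public":
        return image_gb          # single shared OS/application stack
    return users * image_gb      # one personalized stack per user

print(image_storage_gb(1000, model="public"))   # → 20
print(image_storage_gb(1000, model="private"))  # → 20000
```

At a thousand users the private model needs three orders of magnitude more image storage, which is why the customization it offers comes at a real infrastructure cost.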

For more information about cloud computing, please refer to the book Creating the Infrastructure for Cloud Computing: An Essential Handbook for IT Professionals by Enrique Castro-Leon, Bernard Golden, and Miguel Gomez.

Copyright © 2010 Intel Corporation. All rights reserved.

This article is based on material found in the book Creating the Infrastructure for Cloud Computing: An Essential Handbook for IT Professionals by Enrique Castro-Leon, Bernard Golden, and Miguel Gomez. Visit the Intel Press web site to learn more about this book, or see our Recommended Reading List for similar topics.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744. Requests to the Publisher for permission should be addressed to the Publisher, Intel Press, Intel Corporation, 2111 NE 25 Avenue, JF3-330, Hillsboro, OR 97124-5961. E-mail: