Greg Schulz

Biography

Greg Schulz is an independent IT industry advisor, author, blogger (http://storageioblog.com), and consultant. He has a B.A. in computer science and an M.Sc. in software engineering from the University of St. Thomas. Greg has over 30 years of experience across a variety of server, storage, networking, hardware, software, and services architectures, platforms, and paradigms. After spending time as a customer and a vendor, Greg became a Senior Analyst at an IT analysis firm covering virtualization, SAN, NAS, and associated storage management tools, techniques, best practices, and technologies, in addition to providing advisory and education services. In 2006, Greg leveraged the experiences of having been on the customer, vendor, and analyst sides of the "IT table" to form the independent IT advisory consultancy firm Server and StorageIO (StorageIO). He has been a member of various storage-related organizations, including the Computer Measurement Group (CMG), the Storage Networking Industry Association (SNIA), and the RAID Advisory Board (RAB), as well as vendor and technology-focused user groups.

Greg has received numerous awards and accolades, including being named a VMware vExpert and an EcoTech Warrior by the Minneapolis-St. Paul Business Journal, based on his work with virtualization, including his book, The Green and Virtual Data Center (CRC Press, 2009). In addition to his thousands of reports, blogs, Twitter tweets, columns, articles, tips, podcasts, videos, and webcasts, Greg is also the author of the SNIA-endorsed study guide, Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier, 2004).

Cloud and Virtual Data Storage Networking: Virtualized Desktops and Servers
Published: April 30, 2014 • Service Technology Magazine Issue LXXXIII

Abstract: Virtual desktop infrastructure (VDI) is used to complement servers on virtual machines (VMs) and physical machines (PMs). While not all applications can be consolidated, most can be virtualized for use on virtual desktops and servers. Virtualization improves IT agility and network efficiency. It also allows for better data protection by enabling backup/restore, high availability (HA), business continuance (BC), and disaster recovery (DR). Key themes for this article include virtual desktop infrastructure, consolidation vs. virtualization of applications, and the opportunities afforded by virtual servers for cloud and storage networks.

Introduction

There are many issues, challenges, and opportunities involved with virtual servers for cloud and storage networking environments. Physical machines (PMs) or servers form the foundation on which virtual machines (VMs) are enabled and delivered. Virtual desktop infrastructure is utilized in conjunction with VMs in order to manage and protect virtual data. Many different software and hardware options are available for VDIs, which can function as displays or have computational abilities. Although caution is necessary when consolidating and virtualizing servers and desktops, the opportunities that arise from virtualization allow us to improve data protection and storage efficiency.

Virtual Desktop Infrastructure

A virtual desktop infrastructure (VDI) complements virtual and physical servers, providing value propositions similar to those of VMs. These include simplified management and a reduction in hardware, software, and support services at the desktop or workstation, shifting them to a central or consolidated server. Benefits include simplified software management (installation, upgrades, repairs), data protection (backup/restore, HA, BC, DR), and security. As with server virtualization, VDI also makes it possible to run various versions of a specific guest operating system at the same time; in addition to different versions of Windows, other guests, such as Linux, may also coexist. For example, to streamline software distribution, instead of rolling images out to physical desktops, applications are installed into a VM that is cloned, individually configured if necessary, and made accessible to the VDI client. From a cloud perspective, VDI is also referred to as Desktop as a Service (DaaS), not to be confused with Disk as a Service or Data as a Service. VDI vendors include Citrix, Microsoft, and VMware, along with various platform or client suppliers ranging from Dell to Fujitsu, HP, IBM, and Wyse.

The VDI client can be a zero device that essentially functions as a display, such as an iPad, Droid, or other smartphone or tablet. Another type of VDI client is a thin device with less compute and expansion capability, with or without an HDD; because it has no moving parts or dedicated installed software images, it requires less maintenance and costs less. Normal workstations, desktops, and laptops can also be used as thick clients where more capabilities are needed or for mobile workers who can benefit from the enhanced capabilities of such devices. By moving applications and their associated data files to a central server, local storage demands are reduced or eliminated, depending on the specific configuration. However, with applications running in part or in whole on a server, local storage and I/O on the workstation or desktop are traded for increased network traffic: instead of the desktop doing I/O to a local HDD, HHDD, or SSD, I/Os are redirected over the network to a server.

Figure 1 shows a resilient VDI environment that also supports non-desktop VMs. In addition to thin and zero VDI clients, mobile desktops are shown, along with E2E management tools. The shared storage contains the VM images stored as VHD, VMDK, or OVF and is replicated to another location. In addition, some VMs and VDIs are protected as well as accessible via a cloud.

Depending on how the VDI is deployed (for example, in display mode), less I/O traffic will go over the network, with activity other than workstation boot consisting mainly of display images. On the other hand, if the client is running applications in its local memory and making data requests to a server, then those I/Os will be placed on the network. Generally speaking, and this will vary with different types of applications, most workstations do not generate a large number of IOPS once they are running. During boot or start-up, there is a brief flurry of activity that should not be too noticeable, depending on your specific configuration and network.

Figure 1 – Virtual desktop infrastructures (VDIs).

If many clients boot up at the same time, such as after a power failure, maintenance, upgrade, or other event, a boot storm could occur and cause server, storage I/O, and network bottlenecks. For example, if a single client needs 30 IOPS, either in a normal running state when it is busy or during boot, most servers and networks should support that activity. If the number of clients jumps from 1 to 100, the demand increases from 30 to 3,000 IOPS, well in excess of a single server HDD's capability and requiring a faster storage system. Going further, 1,000 clients needing 30 IOPS each results in 30,000 IOPS of storage I/O performance. IOPS can be reads or writes and will vary at different times, with, for example, more reads during boot and more writes during updates. While this simple example does not factor in caching and other optimization techniques, it does point to the importance of maintaining performance during abnormal situations as well as normal running periods. Part of a VDI assessment and planning effort should be to understand the typical storage I/O and networking characteristics for normal, boot, and peak processing periods in order to size the infrastructure appropriately.
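To make the sizing arithmetic above concrete, the short sketch below computes aggregate IOPS for different client counts along with a rough spindle count. The 30 IOPS per client figure comes from the example above; the per-HDD IOPS value and the no-cache, worst-case assumption are illustrative placeholders, not measurements.

```python
# Back-of-the-envelope VDI storage I/O sizing, mirroring the boot-storm example above.
# The per-client and per-HDD IOPS figures are illustrative assumptions; replace them
# with numbers gathered during your own VDI assessment.

IOPS_PER_CLIENT_BOOT = 30     # assumed I/O demand per client during boot or peak
IOPS_PER_HDD = 150            # assumed sustainable IOPS for a single enterprise HDD

def required_backend_iops(clients: int, iops_per_client: int = IOPS_PER_CLIENT_BOOT) -> int:
    """Aggregate IOPS if all clients are active at once (no cache benefit assumed)."""
    return clients * iops_per_client

def hdds_needed(total_iops: int, iops_per_hdd: int = IOPS_PER_HDD) -> int:
    """Rough count of spindles to satisfy the aggregate demand, ignoring RAID and cache."""
    return -(-total_iops // iops_per_hdd)   # ceiling division

if __name__ == "__main__":
    for clients in (1, 100, 1000):
        total = required_backend_iops(clients)
        print(f"{clients:5d} clients -> {total:6d} IOPS -> ~{hdds_needed(total)} HDDs (worst case)")
```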

VDI can also help streamline backup/restore, data protection, and antivirus capabilities by centralizing those functions instead of performing them on an individual desktop basis. VDI considerations, in addition to server, storage, I/O, and networking resource performance, availability, and capacity, should include application availability requirements, as well as verification of which versions of guest operating systems work with various hypervisor or virtualization solutions. This includes verifying support for 32-bit and 64-bit modes, USB device support for encryption or authorization keys, biometric security access control, video graphics driver capabilities, and management tools. Management tools include the ability to capture video screens for playback to support training or troubleshooting, to pause or suspend running VDIs, and to monitor or account for resources. Licensing is another consideration for VDI, both for the hypervisor and associated server-side software and for any updates to guest applications.

Cloud and Virtual Servers

Virtual servers and clouds (public and private) are complementary and can rely on each other or be independent. For example, VMs can exist without accessing public or private cloud resources, and clouds can be accessed and used by non-virtualized servers. While clouds and virtualization can be independent of each other, like many technologies they work very well together. VMs and VDIs can exist on local PMs or on remote HA, BC, or DR systems. VMs and VDIs can also be hosted for BC and DR purposes or made accessible for daily use via public and private clouds. Many public services, including Amazon, Eucalyptus, GoGrid, Microsoft, and Rackspace, host VMs. The types of VMs and formats (VMDK, VHD, OVF) will vary by service, as will functionality, performance, availability, memory, I/O, and storage capacity per hour of use. The benefit of using a cloud service for supporting VMs is to utilize capacity on demand for elasticity or flexibility to meet specific project activities such as development, testing, research, or seasonal surge activity. Some environments may move all of their VMs to a service provider, while others may leverage providers to complement their own resources.

For example, at the time of this writing, Microsoft Azure compute instance pricing for an extra-small VM with a 1.0-GHz CPU, 768 MB of memory, 20 GB of storage, and low I/O capabilities is about $0.05 per hour. For a medium VM with 2 × 1.6-GHz CPUs, 3.5 GB of memory, 490 GB of storage, and high I/O performance, the cost is about $0.24 per hour. A large VM with 8 × 1.6-GHz CPUs, 14 GB of memory, and 2 TB of storage with high performance costs about $0.96 per hour. These are examples, and specific pricing and configurations will vary over time and by service provider, as will SLAs and other fees for software use or rental.
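As a rough illustration of how such hourly rates translate into a monthly budget, the sketch below multiplies the rates quoted above by an assumed 730 hours per month and an optional utilization factor. The 730-hour figure and the 12-hours-per-day split are simplifying assumptions, and the quoted rates themselves will differ by provider and over time.

```python
# Rough monthly cost estimate for the compute instances quoted above.
# The hourly rates come from the example in the text (as of its writing);
# the 730 hours/month figure and the utilization factor are simplifying assumptions.

HOURS_PER_MONTH = 730  # average hours in a month

instance_rates = {     # USD per hour, as quoted above
    "extra-small": 0.05,
    "medium": 0.24,
    "large": 0.96,
}

def monthly_cost(rate_per_hour: float, utilization: float = 1.0) -> float:
    """Estimated monthly spend for one instance running `utilization` of the time."""
    return rate_per_hour * HOURS_PER_MONTH * utilization

for name, rate in instance_rates.items():
    always_on = monthly_cost(rate)
    business_hours = monthly_cost(rate, utilization=12 / 24)  # e.g., only 12 h/day
    print(f"{name:11s}: ${always_on:7.2f}/mo always-on, ${business_hours:7.2f}/mo at 12 h/day")
```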

Can and Should All Servers or Desktops Be Virtualized?

The primary question should not be whether all servers, workstations, or desktops can or should be virtualized, but rather whether everything should be consolidated. Although the two are often assumed to mean the same thing, and virtualization does enable consolidation, there are other aspects to virtualization. Aggregation has become a well-known and popular approach to consolidating underutilized IT resources, including servers, storage, and networks. The benefits of consolidation include improved efficiency by eliminating underutilized servers or storage to reduce electrical power, cooling requirements, floor space, and management activity, or by reusing and repurposing servers that have been made surplus to enable growth or support new application service capabilities.

For a variety of reasons, including performance, politics, finances, and service-level or security issues, not all servers or other IT resources, including storage and networking, lend themselves to consolidation. For example, an application may need to run on a server at a low CPU utilization to meet performance and response-time objectives or to support seasonal workload changes. Another example is that certain applications, data, or even users of servers may need to be isolated from each other for security and privacy concerns.

Political, financial, legal, or regulatory requirements also need to be considered with regard to consolidation. For example, a server and application may be owned by different departments or groups and, thus, managed and maintained separately. Similarly, regulatory or legal requirements may dictate, for compliance or other purposes, that certain systems are kept away from other general-purpose or mainstream applications, servers, and storage. Another reason for separation of applications may be to isolate development, test, quality assurance, back-office, and other functions from production or online applications and systems and to support business continuance, disaster recovery, and security.

For applications and data that do not lend themselves to consolidation, a different use of virtualization is to enable transparency of physical resources to support interoperability and coexistence between new and existing software tools, servers, storage, and networking technologies—for example, enabling new, more energy-efficient servers or storage with improved performance to coexist with existing resources and applications.

Another form of virtualization is emulation or transparency providing abstraction to support integration and interoperability with new technologies while preserving existing technology investments and not disrupting software procedures and policies. Virtual tape libraries are a commonly deployed example of storage technology that combines emulation of existing tape drives and tape libraries with disk-based technologies. The value proposition of virtual tape and disk libraries is to coexist with existing backup software and procedures while enabling new technology to be introduced.

Virtualization Beyond Consolidation: Enabling IT Agility

Another facet of virtualization transparency is to enable new technologies to be moved into and out of running or active production environments to facilitate technology upgrades and replacements. Another use of virtualization is to adjust physical resources to changing application demands such as seasonal planned or unplanned workload increases. Transparency via virtualization also enables routine planned and unplanned maintenance functions to be performed on IT resources without disrupting applications and users of IT services.

Virtualization in the form of transparency, or abstraction, of physical resources to applications can also be used to help achieve energy savings and address other green issues by enabling newer, more efficient technologies to be adopted faster. Transparency can also be used for implementing tiered servers and storage to leverage the right technology and resource for the task at hand as of a particular point in time.

Business continuance and disaster recovery are other areas where transparency via virtualization can be applied in a timely and cost-efficient manner in-house, via a managed service provider, or in some combination. For example, traditionally speaking, a BC or DR plan requires the availability of similar server hardware at a secondary site. A challenge with this model is ensuring that the service and servers are available when needed. For planned testing, this may not be a problem; in the event of a disaster, however, a first-come, first-served situation could be encountered due to contention among too many subscribers for the same finite set of physical servers, storage, and networking resources.

Figure 2 shows the expanding scope and focus of virtualization beyond consolidation. An important note is that Figure 2 is not showing a decrease or de-emphasis of consolidation or aggregation with virtualization, but rather an overall expanding scope. In other words, there will continue to be more virtualization across servers, storage, workstations, and desktops for consolidation or aggregation purposes. However, there will also be an expanding focus to include those applications that do not lend themselves to consolidation or that are not in high-density aggregation scenarios.

A high-density consolidation virtualization scenario might be dozens of VMs per PM, whereas in the next wave, some systems will be deployed with only one or a couple of VMs to meet different SLO, SLA, and QoS requirements. For example, a SQL Server database supporting a time-sensitive, customer-facing application needs to meet specific performance QoS SLOs from 7 a.m. to 7 p.m. While it is possible to have other VMs as guests on the same PM, to meet QoS and SLA requirements the SQL Server database and its applications, instead of being consolidated, will be deployed on a faster server with plenty of memory to get more work done faster, with improved productivity as a by-product. The reason for placing the SQL Server database in a VM with a dedicated PM is to gain agility and flexibility, including the ability to proactively move it for HA or BC purposes and to facilitate easier DR.

Figure 2 – Expanding focus and scope of virtualization.

Another reason for placing the SQL Server database in a VM is that during the 7 a.m.–7 p.m. prime-time period, the PM is dedicated to that application, but during off hours, other VMs can be moved onto the PM. For example, the fast PM can be used for running nightly batch or other applications, in addition to being used for IRM tasks such as backup or database maintenance. The net result is that the PM itself is used more effectively around the clock while making a faster resource available to a time-sensitive application, thereby achieving both efficiency and effectiveness. In other words, leverage x86 servers in a manner similar to how larger proprietary and mainframe systems have been managed to higher efficiency and effectiveness levels in the past. It’s not always about how many VMs you can put on a PM, but rather how that PM can be used more effectively to support information services.
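A minimal sketch of this time-window placement idea is shown below, assuming a 7 a.m.–7 p.m. prime-time window: during prime time, only the dedicated, latency-sensitive VM is allowed on the PM, while batch or IRM VMs are admitted during off hours. The VM names, window boundaries, and the policy function itself are hypothetical illustrations, not part of any specific hypervisor.

```python
# Sketch of a time-window placement policy: during prime time the PM is reserved
# for the latency-sensitive VM; off hours it accepts batch or IRM VMs.
# The window boundaries and VM names are hypothetical illustrations.

from datetime import datetime, time

PRIME_START = time(7, 0)          # 7 a.m.
PRIME_END = time(19, 0)           # 7 p.m.
DEDICATED_VM = "sqlserver-prod"   # hypothetical latency-sensitive VM

def in_prime_time(now: datetime) -> bool:
    return PRIME_START <= now.time() < PRIME_END

def may_place(vm_name: str, now: datetime) -> bool:
    """Allow additional VMs on the dedicated PM only outside the prime-time window."""
    if vm_name == DEDICATED_VM:
        return True                   # the protected workload is always allowed
    return not in_prime_time(now)     # batch/backup VMs only during off hours

if __name__ == "__main__":
    for hour in (8, 22):
        stamp = datetime(2014, 4, 30, hour, 0)
        print(f"nightly-batch allowed at {hour:02d}:00 -> {may_place('nightly-batch', stamp)}")
```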

If you are a vendor, a value-added reseller (VAR), or a service provider and you ask a prospective customer if he is looking to virtualize his servers, desktops, or storage, and he replies that he is not, or only on a limited basis, ask him why not. Listen for terms or comments about performance, quality of service, security, consolidation, or third-party software support. Use those same keywords to address his needs and provide more solution options. For example, if the customer is concerned about performance, talk about how to consolidate where possible while deploying a faster server with more memory to address the needs of the application. Then talk about how that fast server needs fast memory, fast storage, and fast networks to be more productive, and how virtualization can be combined with HA and BC. The net result is that instead of simply trying to sell consolidation, you may up-sell a solution that addresses your customer's different needs, helping him be more effective as well as differentiating yourself from competitors who are delivering the same consolidation pitch.

Similar to servers, the same scenarios apply to virtual desktops in that some workstations or laptops can be replaced with thin or stripped-down devices for certain usage or application scenarios. However, where the focus is enabling agility and flexibility and reducing IRM costs, workstation or desktop virtualization can also be used for non-thin clients. For example, a workstation or laptop for a user who needs portability, performance, and access to localized data may not be an ideal candidate for a thin device and a virtual desktop. Instead, consider a hypervisor or VM existing on the PM to facilitate IRM activities, including software install or refresh, HA, and BC. For example, when something goes wrong on the workstation, it is usually tied to a software issue as opposed to hardware.

A common solution is to reload or rebuild the software image (e.g., re-image) on the workstation. Instead of sending the workstation in to be re-imaged or dispatching someone to repair and re-image it, virtually repair the device and software. This is possible via a hypervisor installed on the PM or workstation with a primary guest VM for the user and maintenance VMs that can be used for rapid refresh or re-image. Instead of reloading or re-imaging, use a VM clone followed by restoration of unique settings and recent changes.
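The clone-based refresh described above can be sketched as a simple workflow, assuming a maintained golden VM image and a stored user profile. The functions below are hypothetical stand-ins for hypervisor and profile-management operations; no real hypervisor API is being called, and all names are illustrative.

```python
# Hypothetical sketch of the virtual "repair by re-image" workflow described above:
# clone a golden VM image, then restore the user's unique settings and recent changes.
# The hypervisor calls are stand-in functions, not a real hypervisor API.

from dataclasses import dataclass

@dataclass
class UserProfile:
    username: str
    settings: dict          # e.g., mapped drives, printers, preferences
    recent_data_path: str   # location of recent, user-specific data to restore

def clone_golden_image(golden_vm: str, new_vm: str) -> str:
    """Stand-in for a hypervisor clone operation from a maintained golden VM."""
    print(f"Cloning {golden_vm} -> {new_vm}")
    return new_vm

def restore_profile(vm: str, profile: UserProfile) -> None:
    """Stand-in for reapplying the user's settings and recent data to the clone."""
    print(f"Applying settings for {profile.username} to {vm}")
    print(f"Restoring recent data from {profile.recent_data_path}")

def refresh_workstation(golden_vm: str, user: UserProfile) -> str:
    """Replace a broken primary guest VM with a fresh clone instead of re-imaging."""
    new_vm = clone_golden_image(golden_vm, f"{user.username}-desktop-refresh")
    restore_profile(new_vm, user)
    return new_vm

if __name__ == "__main__":
    user = UserProfile("jdoe", {"printer": "office-3rd-floor"}, r"\\fileserver\jdoe\recent")
    refresh_workstation("win7-golden-image", user)
```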

Common Virtualization Questions

What is the difference between SRM for virtualization and SRM for systems or storage? In the context of server virtualization, and in particular VMware vSphere, SRM stands for Site Recovery Manager, a framework and tools for managing HA, BC, and DR. SRM in the context of storage, servers, and systems in general stands for Systems or Storage or Server Resource Management, with a focus on collecting information, monitoring, and managing resources (performance, availability, capacity, energy efficiency, QoS).

Does virtualization eliminate vendor lock-in? Yes and no. On one hand, virtualization provides transparency and the flexibility of using different hardware or supporting various guest operating systems. On the other hand, vendor lock-in can shift to that of the virtualization technology or its associated management tools.

Why virtualize something if you cannot eliminate hardware? To provide agility, flexibility, and transparency for routine IRM tasks, including load balancing and upgrades in addition to HA, BC, and DR.

Do virtual servers need virtual storage? No, virtual servers need shared storage; however, they can benefit from virtual storage capabilities.

What is meant by “aggregation can cause aggravation”? Consolidating all of your eggs into a single basket can introduce single points of failure or contention. Put another way, put too many VMs into a PM and you can introduce performance bottlenecks that result in aggravation.

I have a third-party software provider who does not or will not support us running their applications on a virtual server; what can we do? You can work with the application provider to understand its concerns or limitations about VMs. For example, the provider may be concerned about performance or QoS being impacted by other VMs on the same PM. In that case, do a test, simulation, or proof of concept showing how the application can run on a VM with either no other VMs or only a limited number of them, with no impact on the solution.

Another scenario could be that the application requires access to a special USB or PCI adapter device that may not be supported or shared. In the case of PCI adapters, work with the solution provider and explore some of the shared PCIe Multi-Root (MR) I/O Virtualization (IOV) topics discussed in Chapter 11 of the book. Another possibility is to work with your hypervisor provider, such as Citrix, Microsoft, or VMware, who may already have experience working with the third-party application provider and/or the server vendor and VAR you may be using. Bottom line: Find out what the concern is and whether and how it can be addressed, work through the solution, and help the third party enable its solution for a virtual environment.

Conclusion

There are many different uses for virtualization, from consolidation of underutilized systems to enabling agility and flexibility for performance-oriented applications. Not all applications or systems can be consolidated, but most can be virtualized. Virtualization can boost productivity and enable HA, BC, and DR for nonconsolidated servers. It is important to carefully consider applications that are being consolidated and/or virtualized in order to protect data and avoid introducing bottlenecks. Through the efficient implementation of application virtualization, though, many systems can be made safer and more effective.

This article is based on material found in the book Cloud and Virtual Data Storage Networking by Greg Schulz.

Visit the publisher’s page to learn about purchasing a copy of this book:
http://www.crcpress.com/product/isbn/9781439851739