Expert Reference Series of White Papers

CHOOSING THE CORRECT HYPERVISOR: A CRITICAL DECISION

Andy Cummings, MCT, MCTS, MCITP, MCSE, CISSP

1-800-COURSES | www.globalknowledge.com

Introduction

Hypervisor technology allows virtual machines running any software we choose to be backed up, restored, modified, copied, and even rolled back to a previous state with the click of a button. This brings tremendous speed and efficiency benefits to every aspect of IT, from program development and testing to desktop management, server management, troubleshooting, security, and maintenance. Faster IT means faster prototyping and deployment throughout the entire business. Speed wins races.

Virtualization with hypervisor technology is an investment that returns benefits on day one of your decision to implement. Immediately, the company can assess its information technology with a realistic focus on consolidating resources, cutting direct costs, gaining speed, and even improving the quality of service to customers. These improvements are not optional; your competitors are pursuing them. How well you do it determines the speed advantage you have over your competition.

The purpose of this white paper is to help you understand the major factors in choosing a hypervisor technology. At present, VMware, Microsoft, and Citrix are the major software vendors vying for your project dollars. There are core similarities in each vendor’s approach to virtualization, but distinct differences within each product line. First, let’s clarify hypervisor terminology. Then, to put the hypervisor decision in context, we need to define the current state of major CPU and network technology. Lastly, we will focus on the critical factors you will need to clarify for your organization before investing in a specific vendor solution.

Hypervisors

Virtual machine manager (VMM) is another name for hypervisor. Hypervisors allow multiple instances of operating systems, known as guests, to run concurrently on a single device, defined as the host. Each virtual machine (VM) guest believes it has its own hardware. Hypervisors are software components designed to manage guest operating systems on hosts.

Both the terminology and the initial technology of hypervisors go back to the 1960s: the IBM System/360 mainframe was designed to run its own programs and also emulate the earlier IBM 7080. Today, enabled by the dominance of inexpensive multicore CPUs and cloud computing, hypervisor technology is changing IT as we know it. Everyone, from end users to developers to administrators, has been or will be impacted by this technology.

What are the benefits of implementing hypervisor technology? Virtual hardware is very cheap and flexible. Why should a company be locked into buying dozens of physical servers for different applications when many of those systems can be consolidated and virtualized onto a much smaller number of servers? Server vendors have sold millions of units that are frequently underutilized, and virtualization reduces this overspending on server resources. This is evolutionary efficiency in the tech world. Fewer servers means less cost right out of the gate. Managing virtual machines vastly multiplies administrative power, reducing the time and effort needed to achieve desired project results.
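To make the host/guest terminology concrete, the following is a minimal sketch of asking a hypervisor host which guests it is currently running. It assumes a Linux host running KVM/QEMU with the libvirt-python bindings installed; the connection URI and output format are illustrative assumptions, not features of any specific vendor product discussed in this paper.

# Minimal sketch: list the guest VMs a hypervisor host is running.
# Assumes a Linux host with KVM/QEMU and the libvirt-python package
# (pip install libvirt-python); the URI below is an assumption and
# would differ for other hypervisors or remote hosts.
import libvirt

def list_guests(uri="qemu:///system"):
    conn = libvirt.open(uri)                      # connect to the local hypervisor
    try:
        for dom in conn.listAllDomains():
            # info() returns [state, maxMem KiB, curMem KiB, vCPUs, cpuTime ns]
            state, max_mem_kib, _, vcpus, _ = dom.info()
            print(f"guest={dom.name():20s} vCPUs={vcpus} "
                  f"maxMemMiB={max_mem_kib // 1024} "
                  f"running={state == libvirt.VIR_DOMAIN_RUNNING}")
    finally:
        conn.close()

if __name__ == "__main__":
    list_guests()

Each printed line represents one guest that believes it owns the vCPUs and memory shown, while in reality the hypervisor is scheduling all of them onto the same physical host.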
[Figure: Type 1 and Type 2 Hypervisor Architectures]

There are two commonly defined hypervisor types.

Type 2 hypervisors run on top of a host operating system such as Linux, Windows, or Mac OS. This means there is an additional layer between the virtual machine and the hardware, and performance suffers. That is fine if you want to run a second desktop OS or a test environment on a workstation, but not if you want to run mission-critical applications in your data center. VMware Workstation, Windows Virtual PC, and VirtualBox are Type 2 hypervisors designed to run on desktop systems for compatibility, testing, or security purposes. Type 2 hypervisors are not designed to handle anything beyond small workloads on an individual workstation.

Type 1 hypervisors, on the other hand, do not require an operating system on the host computer because, essentially, the hypervisor IS the operating system on the hardware. These are “bare-metal” hypervisors. Removing the traditional operating system from the host computer generates a tremendous performance boost for the virtual machines. Higher-end products such as VMware ESXi, Microsoft Hyper-V, and Citrix XenServer are Type 1 hypervisors designed to handle enterprise-level functions on server hardware.

This is the real battleground. Vendors are racing to supply software, hardware, and cloud data center solutions to meet the ever-growing demand. Companies must decide which applications and servers to virtualize, whether to host in-house or outsource, and which parts of the business can be enhanced by this technology.

Energy costs are often the largest ongoing chunk of overhead in any data center. The reduced energy costs from running fewer servers are, on their own, enough to show that this technology is vitally important. Since energy costs also seem to do nothing but rise, any technology that seriously reduces this expense is in high demand. The key in IT has always been maximizing resource usage while reducing costs; it is all about doing more with less.

The best answers to particular project needs are often hidden beneath the surface. You have to think of each technology, and each vendor’s solutions, as moving targets, changing and evolving with their own strengths and weaknesses. Simply comparing vendor solutions head-to-head is often too simplistic. With any significant investment, it is important to understand where each vendor is going and what it plans for the future. Sometimes the weakest product at the moment has the best long-term design model. Regardless of the exciting potential of any vendor offering, remember that you do not want to paint yourself into a corner with no way to jettison a software or hardware product quickly if necessary. Many an IT pro has led an organization down a dead-end alley. Today, the keys are open standards, excellent conversion tools, and well-qualified pros running the systems.
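As a practical footnote to the product lists above, the following is a minimal sketch of checking, from inside a Linux guest, which virtualization platform it is running on. It assumes a modern Linux guest with systemd (for the systemd-detect-virt utility) and readable DMI data; both assumptions are mine and this is not a claim about any vendor's own tooling.

# Minimal sketch: ask a Linux guest which virtualization platform it sits on.
# Assumes systemd (for systemd-detect-virt) and readable DMI data; prints
# "none" on bare metal. Illustrative only.
import subprocess
from pathlib import Path

def detect_virt():
    try:
        # Prints identifiers such as "kvm", "vmware", "microsoft", "oracle", or "none".
        result = subprocess.run(["systemd-detect-virt"],
                                capture_output=True, text=True)
        platform = result.stdout.strip() or "unknown"
    except FileNotFoundError:
        platform = "unknown (systemd-detect-virt not available)"

    vendor_path = Path("/sys/class/dmi/id/sys_vendor")
    vendor = vendor_path.read_text().strip() if vendor_path.exists() else "unknown"
    return platform, vendor

if __name__ == "__main__":
    platform, vendor = detect_virt()
    print(f"virtualization: {platform}; hardware vendor string: {vendor}")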
Cloud Computing

Cloud computing initially emerged as a means for easy storage and retrieval of data, e-mail, documents, and project-management tasks from Internet-based servers accessible from nearly anywhere on the planet. The cloud movement has since evolved into moving more mission-critical business functionality out to hosted, less-expensive, virtualized environments.

The challenge for IT departments is to take advantage of the cost benefits of cloud computing without giving up the security and performance of traditional internal networks. In other words, companies are struggling with how best to build their own private clouds. Many vendors now use hypervisor technology to offer private cloud services or Infrastructure as a Service (IaaS).

It is important to note, however, that many applications are simply not ready for virtualization. Virtualization may lead to solid cost reductions, but lost speed, CPU cycles, and throughput can be deadly to certain applications. CPU, network, and hypervisor software are changing very rapidly, and we have to know when the correct time is to move specific workloads to a virtualized environment. Move too soon and the cost savings can be lost through poor performance; move too late and potential virtualization savings are forfeited.

Understanding this dynamic means having insight into the current constraints on creating smaller and faster processors. Virtualization strategies are highly dependent on current CPU architecture; as the CPU evolves, so do new virtualization options. Moore’s law says we will continue to shrink transistors and put more and more of them on a chip, as we have done for several decades now. That still holds today, but we cannot build a 15 GHz chip because we cannot keep it cool. We do not yet know how to get around the physics and thermodynamics of dealing with that heat. Virtualization depends heavily on the chips that run the servers. It is not really an “if” question for virtualization; it is a question of when, and that depends on the needs of the application versus the current capabilities of virtualization hardware and software.

Multiple-core Chips

Chip makers have sidestepped the race for the single fastest chip by building chips containing multiple cores. In essence, a dual-core chip has two processors in one, a consolidation very similar to virtual machine technology. However, the two cores are not as powerful as two separate physical processors, which is, again, much like virtual machine technology. Multi-core systems have fueled the rise of virtualization because they provide the low-cost horsepower needed at the CPU level to support numerous concurrent virtual machines in the data center.

There are several critical factors to consider when selecting hypervisor technology: maturity of the software, consolidation ratios, hardware requirements, software requirements and licensing, management operations, training, drivers, virtual machine performance, memory management, and high availability. Each of these areas needs to be addressed by your decision-making team.

[Figure: Critical factors to consider when selecting hypervisor technology]

Maturity

As software products go, maturity does not simply mean that the code base has been through years of iterations. Maturity means that, whatever the age of the product, the code is stable, does what it is supposed to do, and keeps your virtual machines up and running as well as or better than they performed on their own server hardware.

Consolidation Ratios

The major point of virtualization is to reduce costs, both direct and indirect. If you can run 15 virtual servers on a single server without any performance degradation, that is 14 fewer servers you need, the ones that used to max out your energy bills and IT staff. There is no hard-and-fast rule here, but you know roughly how many servers you want to virtualize, and you know what those servers need today to run. That is more than enough to do some simple math and then find the vendor who will offer you the best consolidation ratio and, thus, the lowest hardware outlay; a rough sketch of that arithmetic follows.
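The following is a minimal sketch of that simple math, using purely illustrative numbers: a per-VM CPU and memory footprint, a candidate host configuration, and an assumed vCPU overcommit ratio. None of these figures come from the vendors discussed in this paper; substitute your own measured workloads and the ratios each vendor actually quotes you.

# Back-of-the-envelope consolidation estimate with illustrative numbers.
# All inputs are assumptions for the sketch, not vendor data.
import math

servers_to_virtualize = 60       # existing physical servers you plan to retire
avg_vcpus_per_vm      = 2        # vCPUs each workload needs today
avg_ram_gib_per_vm    = 6        # RAM each workload needs today

host_cores            = 16       # cores in one candidate virtualization host
host_ram_gib          = 192      # RAM in one candidate host
vcpu_per_core_ratio   = 4        # assumed vCPU:core overcommit ratio
usable_ram_fraction   = 0.85     # hold back some RAM for the hypervisor itself

# How many VMs one host can carry, limited by CPU or by memory, whichever is tighter.
vms_by_cpu   = (host_cores * vcpu_per_core_ratio) // avg_vcpus_per_vm
vms_by_ram   = int(host_ram_gib * usable_ram_fraction // avg_ram_gib_per_vm)
vms_per_host = min(vms_by_cpu, vms_by_ram)

hosts_needed = math.ceil(servers_to_virtualize / vms_per_host)
print(f"VMs per host: {vms_per_host} (CPU limit {vms_by_cpu}, RAM limit {vms_by_ram})")
print(f"Hosts needed for {servers_to_virtualize} servers: {hosts_needed}")
print(f"Consolidation ratio achieved: {servers_to_virtualize / hosts_needed:.1f}:1")

With these made-up inputs the estimate works out to 27 VMs per host, three hosts for 60 retired servers, and a 20:1 consolidation ratio. In practice you would also budget at least one extra host for failover capacity, which the high availability discussion later in this paper touches on.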
Hardware Requirements

See consolidation ratios above. If you think of your applications and data as your kids getting on a school bus, then the virtual machine manager software and the server hardware are the iron doing the carrying. You had better feel comfortable with both the driver and the bus before you put your kid on board. This is not the time to go cheap on the hardware. The number of required physical servers will decrease, but the quality and power of your new virtualization hub servers must not; in fact, these servers have to be better because of the consolidated workloads. Remember that by consolidating your servers you have fewer physical points of failure; however, the failure of a virtual server has the same impact on the non-IT departments as a physical server failure. Virtualization should allow you to bring your users back online faster than ever before, but if the hardware is bad and fails when it should not, that hardware is costing you money, again and again.

Software Requirements

This is all about licensing and components, or money and management, if you will. Licensing costs are a huge factor with all the major VMM players, and each provider has a different pricing model. Misunderstanding these sometimes IRS-like pricing structures will skew your data badly. Never attempt to decipher a vendor licensing system on your own; preferably have someone else in the room, or at least have someone check and double-check your interpretation. Each VMM provider also offers a different management solution. Make sure you understand the additional software components required to fully manage a vendor’s virtualization solution; many of these components have their own licensing requirements.

Management Operations

As in many projects, there will most likely be a surge in expenses followed by a gradual reduction over time. The key is to have team members constantly on the lookout for ways to reduce costs. Virtualization will create a new operational structure within the IT department. This is good change, but change nonetheless. Make sure your implementation and maintenance teams are prepared.

Training

If the hypervisor software and server hardware are the bus, then your virtualization architects and administrators are the bus drivers. Bad design and implementation decisions cost money just as surely as a failed physical server. Training and verification of skills are critical.

Drivers

There is a heated debate among the major VMM vendors over whether it is important to have hardware and software drivers optimized specifically for virtual environments. Just know that no one argues any more about the significance of drivers to system stability. Bad drivers mean bad systems, whether we are discussing physical or virtual machines.

Virtual Machine Performance

We have already discussed a benchmark, a fairly low one at that: virtual systems should meet or exceed the performance of their physical counterparts, at least for the applications within each server. Everything beyond meeting this benchmark is profit. A crude way to check this is sketched below.
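The following is a minimal, hedged sketch of such a before-and-after check: time a CPU-bound task and a small file-write task on the physical server, then run the identical script on its virtualized replacement and compare. It is deliberately crude and is no substitute for benchmarking the real application under real load with the vendors' own monitoring tools; every name and figure here is illustrative.

# Crude before/after check: run on the physical server, then on its virtual
# replacement, and compare the timings. Illustrative only.
import os
import tempfile
import time

def cpu_task(n=2_000_000):
    """A small CPU-bound loop: sum of integer square roots."""
    return sum(int(i ** 0.5) for i in range(n))

def disk_task(mib=64):
    """Write and flush a temporary file to exercise storage throughput."""
    block = os.urandom(1024 * 1024)
    with tempfile.NamedTemporaryFile(delete=True) as f:
        for _ in range(mib):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f} s")

if __name__ == "__main__":
    timed("cpu", cpu_task)
    timed("disk (64 MiB write)", disk_task)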
Ideally, you want each hypervisor to optimize resources on the fly to maximize performance for each virtual machine. The question is how much you are willing to pay for this optimization. The size and mission-criticality of your project generally determine the value of that optimization.

Memory Management

Aside from managing physical and virtual CPUs, memory management is a critical core function of any hypervisor. Since each virtual machine consumes system memory when online, and since large memory configurations are still expensive, efficient memory management keeps costs down in the long run and improves performance immediately.

High Availability

The key question here: what if “it” breaks, where “it” is anything from a single virtual machine to the full hypervisor or a physical server? Each major vendor has its own high availability solution, and the answers are wildly different, ranging from very complex to minimalist approaches. Your understanding of both the disaster prevention and the disaster recovery methods for each system is critical. Just as a lawyer should never ask a question in court without already knowing the answer, you should never bring a virtual machine online without fully knowing the protection and recovery mechanisms in place.

Conclusion

Because there are many variables to consider, selecting the correct hypervisor is not as simple as scanning a features table. However, by asking the questions we have discussed, I believe your choice may become clear far more quickly than you might expect.

Learn More

Learn more about how you can improve productivity, enhance efficiency, and sharpen your competitive edge through training. Visit www.globalknowledge.com or call 1-800-COURSES (1-800-268-7737) to speak with a Global Knowledge training advisor.

Basic Administration for Citrix XenApp 6 (CXA-204-2)
Enterprise Virtualization Using Microsoft Hyper-V (M6422, M6331)
VMware vSphere: Fast Track [V4.1]
VMware vSphere: Install, Configure, Manage [V4.1]

About the Author

Andy Cummings is an MCT, MCTS, MCITP, MCSE, and CISSP. He has been a contract trainer working with Global Knowledge since 2011 and has been consulting and training on the Microsoft BackOffice suite of applications since 1994. He currently teaches courses in Microsoft Hyper-V, Windows 7, SQL Server 2008, Exchange 2010, and Windows Server 2008 R2. You can connect with Andy on LinkedIn at linkedin.com/in/rocketbrain. He welcomes connection requests from his readers.

Copyright ©2011 Global Knowledge Training LLC. All rights reserved.