Edit: Still worth considering, and 96G is still pretty small.
Originally posted November 10, 2015 on AIXchange
This article examines the issues VMware and x86 customers face as they try to virtualize their environments:
Server virtualization has brought cost savings in the form of a reduced footprint, higher physical server efficiency, and lower power consumption.
Obviously, we in the Power systems world can take this statement to heart. By reducing our physical server count and consolidating workloads, we can save on power and cooling and all of the other physical things we need for our systems (including network ports, SAN ports, cables, etc.).
A non-technical driver may be the workload’s size. If an application requires as much compute as your largest VM host can provide, it can be cost-prohibitive to virtualize it. For instance, suppose a large database server consumes 96 GB of RAM and your largest physical VM host has 96 GB of RAM. The advantages of virtualization may not outweigh the cost of adding hypervisor overhead on top of the workload.
One last non-technical barrier is political issues surrounding mission-critical apps. Even in today’s climate, there’s a perception by some that mission-critical applications require bare-metal hardware deployments.
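To put rough numbers on the sizing example in that excerpt, here's a minimal sketch in Python. The 10 percent hypervisor memory overhead is purely an assumed figure for illustration, not a measured or published number.

    # Sizing sketch: does the workload fit on the host once the hypervisor
    # takes its cut? The overhead fraction below is an assumption.
    WORKLOAD_GB = 96            # memory the database workload needs
    HOST_GB = 96                # memory in the largest physical VM host
    HYPERVISOR_OVERHEAD = 0.10  # assumed fraction of host memory lost to the hypervisor

    usable_gb = HOST_GB * (1 - HYPERVISOR_OVERHEAD)
    headroom_gb = usable_gb - WORKLOAD_GB

    print(f"Usable memory after hypervisor overhead: {usable_gb:.1f} GB")
    print(f"Headroom left once the workload is placed: {headroom_gb:.1f} GB")
    # A negative number here is the article's point: when the workload is as
    # large as the host, virtualizing it on that host buys you nothing.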
I found this interesting since 96 GB of memory isn’t a lot on today’s Power servers. In addition, with the scaling in both memory and CPU, we can assign some very large workloads to our servers. Though the need to assign physical adapters exclusively to an LPAR is far less than it once was, we still have the option to use the VIO server for some workloads and physical adapters for others. Alternatively, we can use virtual for network and physical for SAN, or vice versa. With this flexibility, we can mix and match as needed and make changes dynamically. It’s another advantage of running workloads on Power:
It would be unrealistic to think the abstraction that enables the benefits of virtualization doesn’t come at a cost. The hypervisor adds a layer of latency to each CPU and I/O transaction. The more demanding the application’s performance requirements, the greater the impact of that latency.
Since Power Systems are always virtualized, the hypervisor is always running on the system. The chips and the hypervisor are designed for virtualization. The same company designs the hardware, virtualization layer and the operating system. Everything works hand in hand. Even a single LPAR running on a Power frame runs the same hypervisor under the covers. We simply don’t see the kinds of performance penalties that VMware users do:
However, these direct access optimizations come at a cost. Enabling DirectPath I/O for Networking for a virtual machine disables advanced vSphere features such as vMotion. VMware is working on technologies that will enable direct hardware access without sacrificing features.
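Incidentally, if you want to see what the PowerVM hypervisor is actually costing a given partition, AIX makes that easy to check. Here's a minimal sketch, assuming you're on an AIX LPAR with the standard lparstat command available; plain lparstat output includes a %hypv column showing the share of time spent in the hypervisor.

    # Check hypervisor involvement from inside an AIX LPAR using lparstat.
    import subprocess

    def run(cmd):
        """Run a command and return its stdout as text."""
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    # Partition configuration: type, mode, entitled capacity, and so on.
    print(run(["lparstat", "-i"]))

    # Five one-second utilization samples; the %hypv column shows how much
    # of the partition's time is being spent in the hypervisor.
    print(run(["lparstat", "1", "5"]))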
The same trade-off VMware describes with DirectPath I/O and vMotion could be raised about Live Partition Mobility (LPM) on Power systems that have been built with dedicated adapters. The nice thing is that on the fly we can change from physical adapters to virtualized adapters, run an LPM operation to move our workload to another physical frame, and then add physical adapters back into the LPAR. The flexibility we get with dynamic logical partitioning (DLPAR) operations allows us to add and remove memory, CPU, and physical and virtual adapters from our running machine.
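Here's a minimal sketch of that physical-to-virtual-to-LPM workflow, driven from the HMC command line over ssh. The HMC address, managed system names, partition name, and DRC index are all placeholders for illustration; verify the chhwres and migrlpar options against your own HMC before trying anything like this.

    # Sketch of the workflow above: DLPAR-remove the dedicated adapter,
    # validate and run LPM, then DLPAR-add a physical adapter back.
    # Every name below is a placeholder for illustration.
    import subprocess

    HMC = "hscroot@myhmc"          # placeholder HMC user and hostname
    SRC, DST = "frame1", "frame2"  # placeholder managed system names
    LPAR = "dbprod01"              # placeholder partition name
    SLOT = "21010021"              # placeholder DRC index of the physical adapter

    def hmc(cmd):
        """Run a command on the HMC over ssh and return its output."""
        return subprocess.run(["ssh", HMC, cmd], capture_output=True, text=True).stdout

    # 1. DLPAR-remove the dedicated physical I/O slot so the LPAR is fully virtual.
    print(hmc(f"chhwres -r io -m {SRC} -o r -p {LPAR} -l {SLOT}"))

    # 2. Validate the mobility operation, then perform the migration.
    print(hmc(f"migrlpar -o v -m {SRC} -t {DST} -p {LPAR}"))
    print(hmc(f"migrlpar -o m -m {SRC} -t {DST} -p {LPAR}"))

    # 3. DLPAR-add a physical adapter back to the LPAR on the target frame.
    print(hmc(f"chhwres -r io -m {DST} -o a -p {LPAR} -l {SLOT}"))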
As a quick aside, I expect to see even more blurring of the ways we virtualize our adapters as we continue to adopt SR-IOV:
SR-IOV allows multiple logical partitions (LPARs) to share a PCIe adapter with little or no run-time involvement of a hypervisor or other virtualization intermediary. SR-IOV does not replace the existing virtualization capabilities that are offered as part of the IBM PowerVM offerings. Rather, SR-IOV complements them with additional capabilities.
Getting back to the article on VMware and x86 customers, I was surprised by the conclusion. Most of my Power customers are able to virtualize a very high percentage of their workloads:
Complex workloads can challenge the desire to reach 100% virtualization within a data center. While VMware has closed the gap for the most demanding workloads, it may still prove impractical to virtualize some workloads.
Have you found the overhead associated with hypervisors a hindrance to virtualizing your most demanding workloads?
I’d like to pose these questions to you, my readers. How many of your workloads are virtualized? Do you even consider hypervisors or overhead when you think about deploying your workloads on Power?