To VIOS or Not to VIOS Revisited

Edit: Still worth considering.

Originally posted September 2014 by IBM Systems Magazine

In 2010, I wrote an article that covered the pros and cons of the Virtual I/O Server (VIOS). It’s still a topic I run into today, especially as more IBM i customers consider attaching to SANs. In that article, I mentioned some of the concerns customers have, including the VIO server being a single point of failure and the new skills required to administer it.

I want to reinforce the idea that you can build redundancy into your VIO server design to reduce single points of failure. Some customers like to have dual VIO servers on each physical frame, but you can take it further than that: one pair of VIO servers to handle your storage I/O and another pair to handle your network I/O. Some customers go a step further and segregate their production LPARs onto production VIO servers while putting their test/dev LPARs onto another set of VIO servers.
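
To sketch what the network side of that redundancy can look like, here’s roughly how a Shared Ethernet Adapter with failover might be created on each of a pair of VIO servers. The adapter names are hypothetical: ent0 is the physical adapter, ent4 the virtual trunk adapter (with a different trunk priority set on each VIOS in the HMC), and ent5 the control channel.

    $ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=auto ctl_chan=ent5

With ha_mode=auto on both VIO servers, the SEA with the higher-priority trunk adapter carries the traffic, and its partner takes over if it fails.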

More Flexibility

You have a great deal of flexibility in how you configure your Power Systems servers, depending on the needs of your business.

IBM has made great strides in the usability of VIOS, especially for those uncomfortable with the command line. If you truly don’t want to log in as padmin and do your work from the shell, the Hardware Management Console (HMC) GUI gets better with each new release.

When you click on the Virtual Resources section of the HMC, you have access to Virtual Storage Management, Virtual Network Management and Reserved Storage Device Pool Management. Although these options have been around for a while, some don’t realize they exist, or that ongoing improvements are being made to the interface and the choices that are available.

These options continue to become more powerful. For example, in Virtual Network Management I can create a VSwitch, modify a VSwitch, sync a VSwitch and set a VSwitch mode. I can also view my existing VLANs and my shared Ethernet adapters.
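
If you’d rather script those tasks, the HMC command line has equivalents. As a rough sketch, with SYSTEM standing in for your managed system name and PRODSW as a hypothetical switch name:

    hscroot@hmc:~> chhwres -m SYSTEM -r virtualio --rsubtype vswitch -o a --vswitch PRODSW
    hscroot@hmc:~> lshwres -m SYSTEM -r virtualio --rsubtype vswitch

The first command adds a virtual switch; the second lists the switches already defined on the managed system.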

Similarly, I can manage my storage through the Virtual Storage Management GUI. Changing which hdisks are assigned to which LPAR and changing virtual optical disk assignments to partitions can both be handled from the GUI.

I still prefer the VIO command line, and I still encourage you to learn it; I think it gives you more power and control over the system. But working as padmin is less mandatory than it used to be.
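
For reference, the padmin equivalents of those GUI storage tasks are short. As a sketch, with hdisk5, vhost0 and the image name as hypothetical examples:

    $ lsmap -all
    $ mkvdev -vdev hdisk5 -vadapter vhost0
    $ mkvdev -fbo -vadapter vhost0
    $ loadopt -disk install_image.iso -vtd vtopt0

lsmap shows the existing vSCSI mappings, the first mkvdev maps a physical hdisk to a client’s vhost adapter, the second creates a file-backed virtual optical device, and loadopt loads an image from the virtual media repository into it.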

Easier Installation

Another powerful new tool is the ability to install VIOS directly from the HMC GUI. Instead of fooling around with physical media or setting up a NIM server to load your VIOS, you can now manage a VIOS image repository on the HMC, storing the VIOS optical images on the HMC’s hard drive. I was pleasantly surprised when I was shipped a 7042-CR8 HMC with the HMC V8.8.1 code on it: the VIOS install media was preinstalled on the HMC hard disk.

Loading that first VIO partition onto a new system was a snap. Once I had everything properly configured on the network and had defined my VIO partition via the HMC, I was able to load multiple VIOS LPARs by clicking the Install VIOS radio button and filling in a few network parameters in the GUI.
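
For those who prefer the HMC’s restricted shell, the installios command is available there as well; run it with no arguments and it prompts interactively for the managed system, the partition and profile, and the network settings:

    hscroot@hmc:~> installios

I’d treat the GUI and installios as two routes to the same result; the GUI just makes it harder to fat-finger an IP address.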

This is quite a change for people who are new to Power Systems servers, or for those who don’t have NIM servers or don’t know how to use them. IBM i shops may never have had a NIM server in their environments, so that option isn’t even available to them.

When customers purchase some of the smaller Power Systems servers and opt for a split backplane, it can be a challenge to get the second VIO server loaded because they can’t connect their DVD drive to the second disk controller. Allowing for installation from the HMC greatly simplifies the deployment of VIOS, especially in new environments, and preloading the necessary code makes it that much easier.

More Alternatives

Another development that has arisen since I first wrote that article is the widespread adoption of NPIV, which gives admins an alternative to vSCSI. The advantage is that instead of worrying about mapping LUNs from VIOS to client partitions, you can offload some of that heavy lifting to your SAN team, which can map LUNs directly to the client LPARs that will use them. Not every SAN team welcomes the extra work, though. At one shop that made the change, there were nearly a hundred LPARs on a frame, and the vSCSI mappings had been handled at the VIOS level, which let the SAN team map a great many LUNs to relatively few WWNs. Once the shop migrated to NPIV, that burden shifted, and the SAN team was less than thrilled about it.
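
On the VIOS side, the NPIV work really is lighter. As a sketch, with vfchost0 and fcs0 as hypothetical adapter names:

    $ lsnports
    $ vfcmap -vadapter vfchost0 -fcp fcs0
    $ lsmap -all -npiv

lsnports confirms which physical Fibre Channel ports are NPIV-capable, vfcmap ties the client’s virtual Fibre Channel server adapter to a physical port, and lsmap -npiv shows the resulting mappings. The LUN-to-WWN work then happens on the SAN side.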

Comfort and Choice

The debate will continue, but resistance to deploying VIOS seems to have lessened. As more shops get comfortable with the technology and more people spread the word, there’s less fear around using this method to share adapters across many LPARs.

IBM continues to allow for choice in how you build your machines. I still know of customers who don’t virtualize anything and instead dedicate CPUs and adapters to each LPAR, but that type of setup is becoming rarer as companies realize the benefits of virtualizing their environments with VIOS.