Getting Started With NPIV

Edit: The link still works. This is still a good comparison.

Originally posted April 19, 2011 on AIXchange

NPIV isn’t new functionality, but plenty of customers are only just now getting started with it. I know this because lately I’ve been hearing a lot about NPIV. In response to the numerous queries coming my way, I searched and found this excellent IBM Support document on configuring NPIV:

“N_Port ID Virtualization or NPIV is a Fibre Channel facility allowing multiple N_Port IDs to share a single physical N_Port. This allows multiple Fibre Channel initiators to occupy a single physical port,
easing hardware requirements in Storage Area Network design. An NPIV-capable fibre channel HBA can have multiple N_Port IDs, each with a unique identity and world wide port name.”
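
Before going any further, it’s worth checking whether your adapters and switches actually support this. On the VIOS, the lsnports command lists each physical fibre port and whether the attached fabric supports NPIV (a 1 in the fabric column). Here’s a rough sketch of what you might see; the adapter name, location code and counts below are just placeholders:

    $ lsnports
    name   physloc                      fabric tports aports swwpns awwpns
    fcs0   U789D.001.DQDVXXX-P1-C1-T1        1     64     63   2048   2046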

Compared to using virtual SCSI devices (vSCSI), storage management is greatly simplified with NPIV. NPIV allows a LUN to be zoned directly to a particular client LPAR, rather than using the VIOS as a middleman. So with NPIV and a Shared Ethernet Adapter (SEA), the VIO servers handle only the shared Ethernet and NPIV pass-through duties. Best of all, there’s no need to map and track LUNs; that duty can be left with the SAN team, where it belongs.
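
On the client side, each virtual fibre channel adapter shows up as an ordinary fcs device with its own WWPN, and that’s what the SAN team zones against. A quick way to see it from the client LPAR, as root (fcs0 and the values shown are just examples, and the output is trimmed):

    # lscfg -vpl fcs0
      fcs0   U9117.MMA.XXXXXXX-V3-C30-T1  Virtual Fibre Channel Client Adapter
             Network Address.............C05076001234000A

The Network Address field is the active WWPN; keep in mind the HMC actually generates a pair of WWPNs per virtual adapter, with the second one used during Live Partition Mobility.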

In contrast, when using vSCSI with VIO servers, your lsmap -all output can be a mess to manage if a large number of LUNs are being mapped through your VIOS to client LPARs. I’ve seen servers with hundreds of LUNs being presented to the VIOS. In those cases, the AIX admins must manage the subsets of LUNs that are then mapped to individual VIO clients. All that disk-mapping must be tracked, and I’ve seen many different spreadsheets and documents that attempt to do this.
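
To give you a feel for it, here’s roughly what a single vSCSI mapping looks like in lsmap -all output; now picture hundreds of these stanzas across two VIO servers (all names and IDs below are made up):

    $ lsmap -all
    SVSA            Physloc                                  Client Partition ID
    --------------- ---------------------------------------- ------------------
    vhost0          U9117.MMA.XXXXXXX-V1-C11                 0x00000003

    VTD                   lpar1_rootvg
    Status                Available
    LUN                   0x8100000000000000
    Backing device        hdisk4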

In a typical scenario, two VIO servers will be set up (so that one can be serviced or restarted without those activities impacting the client LPARs). A fibre card or two is usually attached to each VIO server. The SAN team can then zone the VIO servers to the SAN using the World Wide Name (WWN) information from the physical adapters. This results in a pile of LUNs that AIX admins must map to the appropriate VIO clients. To make all of the LUNs accessible from both VIO servers, each LUN’s reserve_policy attribute must be set to no_reserve. So the admins end up doing the mappings twice, once on each VIO server.
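
In practice, that boils down to something like the following, run on each VIO server in turn (hdisk4, vhost0 and the lpar1_rootvg device name are placeholders, and the hdisk numbering will almost certainly differ between your two VIO servers):

    # Allow both VIO servers to open the LUN concurrently
    chdev -dev hdisk4 -attr reserve_policy=no_reserve
    # Map the LUN to the client's virtual SCSI server adapter
    mkvdev -vdev hdisk4 -vadapter vhost0 -dev lpar1_rootvg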

On top of that, admins must pay attention to PVIDs or LUN IDs to ensure that the disk mapped on VIOS1 is the same one mapped on VIOS2. Having reserve_policy set to no_reserve on the disk opens the door to disaster if the same LUN is accidentally mapped to two different clients. If two different clients are booting from the same LUN, it’s time to look for a mksysb and do a restore.
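
A simple sanity check is to compare lspv output on both VIO servers before mapping; the hdisk numbers may differ between the two, but the PVID for a given LUN should match (the values here are invented):

    $ lspv
    NAME     PVID                VG
    hdisk4   00c1234d5e6f7890    None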

One plus with vSCSI is that MPIO software only needs to be loaded on the VIO server. The VIO clients usually just use the built-in AIX MPIO software, since they have no visibility into the disks beyond recognizing that they’re virtual SCSI disks.
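
You can see this from the client itself: the disks present themselves as generic virtual SCSI drives, with one MPIO path through each VIO server (run as root on the client; the device names are illustrative):

    # lsdev -Cc disk
    hdisk0 Available  Virtual SCSI Disk Drive
    # lspath -l hdisk0
    Enabled hdisk0 vscsi0
    Enabled hdisk0 vscsi1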

From this lengthy explanation of vSCSI, you might have already figured that NPIV, once you have it set up, is much easier to use. And you’re correct. With NPIV, virtual WWN information is created for each client LPAR, and the SAN team zones LUNs directly to those LPARs. Virtual fibre adapters must still be mapped to a particular physical fibre card in the VIO server, but admins don’t need to map and track LUNs or worry about reserve locks on them. (We do, however, need to remember to load the appropriate MPIO software into the client LPARs, because the clients now recognize the disks and the storage subsystems they come from.)
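
The VIOS-side setup for NPIV is correspondingly brief: one vfcmap per client virtual fibre adapter, after which lsmap with the -npiv flag shows the mapping and whether the client has logged in to the fabric (vfchost0 and fcs0 are placeholders for your own device names):

    # Tie the client's virtual fibre channel adapter to a physical NPIV-capable port
    vfcmap -vadapter vfchost0 -fcp fcs0
    # Verify the mapping and the client's fabric login status
    lsmap -all -npiv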

I’ll have more NPIV info next week, so stay tuned.