Setting Up NPIV

Edit: This is still good stuff.

Originally posted May 3, 2011 on AIXchange

Following up on this recent post, I want to go into greater detail on setting up NPIV (N_Port ID Virtualization).

With most customers, the first question I get is, “Do I have the hardware to run NPIV?” If you’re running at least POWER6, have IBM 8Gb fibre channel adapters, and your SAN switches are NPIV-capable, you should have what you need.

This document can help you determine whether you’re set up to use NPIV. If you log into your VIO server, run lsnports, and find that the value for “fabric” is 1, you’ll know you can safely map virtual adapters to your physical adapters. (Also remember to read the configuration document I referenced in the previous post.)
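To give a rough idea, here’s what that check might look like from the padmin shell on the VIO server (the adapter names and location codes below are placeholders):

    $ lsnports
    name   physloc                      fabric tports aports swwpns  awwpns
    fcs0   U78A0.001.DNWHZS4-P1-C1-T1   1      64     64     2048    2048
    fcs1   U78A0.001.DNWHZS4-P1-C1-T2   1      64     64     2048    2048

A fabric value of 1 means the adapter and the switch port it’s cabled to both support NPIV; a 0 means one side of that pair doesn’t.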

Setup is straightforward. Create a virtual fibre adapter in your VIO server, then create a virtual fibre adapter in your VIO client. Map the virtual adapter in the VIO server to a physical fibre adapter using the vfcmap command and give the virtual worldwide name (WWN) to your SAN team for zoning.
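As a minimal sketch, assuming the virtual fibre adapter shows up as vfchost0 on the VIO server and you want to map it to the physical adapter fcs0, the mapping and a quick check from the padmin shell might look like this:

    $ vfcmap -vadapter vfchost0 -fcp fcs0
    $ lsmap -vadapter vfchost0 -npiv

On the client side, lscfg -vpl fcsX shows the WWN as the “Network Address” field, which is what you hand to the SAN team.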

Lately I’ve done a number of logical disk migrations for people who initially set up virtual SCSI and want to move to NPIV. Using dynamic LPAR, you can add virtual fibre adapters to the VIO server and client, map the virtual adapter to the physical adapter, and obtain the WWNs from the HMC. If you use Live Partition Mobility in your NPIV environment, remember that you’ll need to zone both virtual WWNs, as both are used during the actual migration.
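As a hedged example, both WWNs for each client virtual fibre adapter can be listed from the HMC command line; something along these lines should work, though the exact attribute names can vary by HMC level, and the managed system name here is made up:

    hscroot@hmc:~> lshwres -r virtualio --rsubtype fc --level lpar \
        -m Server-8233-E8B-SN10XXXXX -F lpar_name,slot_num,wwpns

The wwpns field lists the WWN pair for each adapter, which is what your SAN team needs to zone before you attempt a Live Partition Mobility move.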

NPIV also gives you some flexibility in how you use virtual adapters. I’ve seen environments with one virtual fibre adapter per VIO server, and others that map a virtual fibre adapter to every physical adapter in the VIO server. Some argue that one virtual adapter per VIO server reduces complexity while still providing sufficient redundancy. In many of these environments, the first virtual adapter is mapped to fcs0, the second to fcs1, and so on. Whichever method you choose, I believe it’s important to test the setup by rebooting the VIO servers; you need to verify that what you think will happen when you bring down a VIO server is what actually happens.
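One simple way to see what a VIO server reboot does to your clients is to watch the disk paths in the client LPAR before, during, and after the reboot. A sketch, assuming a client disk hdisk2 with one path through each VIO server:

    # lspath -l hdisk2
    Enabled hdisk2 fscsi0
    Enabled hdisk2 fscsi1

When one VIO server goes down, the path through it should show as Failed while the other stays Enabled; when the VIO server comes back, the path should recover (run cfgmgr in the client if it doesn’t).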

I have customers that reuse the same LUN that they were using with vSCSI. In those cases, we unmounted the filesystems, varied off and exported the volume groups, used the rmdev command to remove the disk and the disk’s mappings from both VIO servers, changed the SAN zoning to point at the virtual WWN instead of the VIO servers’ physical WWNs, ran cfgmgr in the client LPAR so the disk appeared directly in the client, imported the volume groups (importvg -y vgname hdiskX), and mounted the filesystems. It’s almost as if we never made any changes, though you need to be aware of any disk drivers or MPIO software that’s now needed in the client instead of the VIO server.
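Here’s a rough outline of that sequence with made-up names (datavg on hdisk2 in the client, vtscsi2 and hdisk7 on the VIO servers); your device names will differ. In the client LPAR:

    # umount /data
    # varyoffvg datavg
    # exportvg datavg
    # rmdev -dl hdisk2

Then on each VIO server, as padmin:

    $ rmvdev -vtd vtscsi2
    $ rmdev -dev hdisk7

After the SAN team rezones the LUN to the virtual WWN, back in the client:

    # cfgmgr
    # importvg -y datavg hdisk2
    # mount /data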

I also have customers that, rather than go through the downtime associated with remapping their disks, are fortunate enough (because they have enough storage) to simply create new LUNs. They leave their original vSCSI mappings in place, map the new LUNs via NPIV directly to the client, and use migratepv to move the data from the old disks to the new ones. Then they remove the old vSCSI disks and mappings at their leisure.
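A minimal sketch of that approach in the client LPAR, assuming the old vSCSI disk is hdisk2, the new NPIV disk shows up as hdisk3, and the volume group is datavg (all placeholder names):

    # cfgmgr                     # discover the new NPIV LUN, hdisk3 in this example
    # extendvg datavg hdisk3     # add the new disk to the volume group
    # migratepv hdisk2 hdisk3    # move the data off the old vSCSI disk
    # reducevg datavg hdisk2     # remove the old disk from the volume group
    # rmdev -dl hdisk2           # remove the old disk definition

The nice part is that migratepv works with the filesystems mounted, so the data moves without an outage, and the old vSCSI mappings can be cleaned up on the VIO servers afterward.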

One other thing to keep in mind once you complete the move to NPIV: even though you no longer use vSCSI for your disks, you should still keep a vSCSI adapter on your VIO server and client for virtual optical devices. I know I still want the capability to use virtual .iso images as I always have.
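For example, assuming you’ve already created a virtual media repository with mkrep and your vSCSI server adapter shows up as vhost0, a file-backed optical device might be set up from the padmin shell like this (the image name is a placeholder):

    $ mkvdev -fbo -vadapter vhost0                         # creates a virtual optical device, e.g. vtopt0
    $ mkvopt -name mydvd.iso -file /home/padmin/mydvd.iso  # add the image to the media repository
    $ loadopt -disk mydvd.iso -vtd vtopt0                  # load it so the client sees it as a CD/DVD

The client simply sees a cd0 device on its vSCSI adapter and can mount it as usual.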

So are you looking forward to an NPIV migration project? And if you’re already up and running, what’s your experience been like? Please share your thoughts in Comments.