Run IBM i and AIX in the Same Physical Frame

Edit: Some links no longer work.

POWER technology-based servers allow for consolidation

Originally posted December 2008 by IBM Systems Magazine

As I wrote in my blog titled “My Love Affair with IBM i and AIX”, I started my career working on AS/400 servers running OS/400 – and I loved it. Then I started working on AIX – and I loved that. AIX has been my world for the past decade.

During that time, AIX customers consolidated: instead of 10 standalone AIX servers, they would run one or two frames with pools of shared processors. But each of these LPARs still needed its own dedicated hardware adapters, so they consolidated again to share those resources. This meant adding a Virtual I/O Server (VIOS) with shared network adapters and shared Fibre Channel adapters, which reduced the number of physical adapters required.

Now that POWER technology-based servers have converged and AIX and IBM i can run on the same machine, it makes sense to ask what else can be shared. How can we take our current AIX and IBM i environments and run them on the same physical frame?

The IBM Technical University held this fall in Chicago offered sessions for AIX customers and for IBM i customers. If you were like me, you bounced between them. Great classes covered the pros and cons of situations where running IBM i on VIOS may make sense. Although the idea of running IBM i as a client of VIOS might sound intimidating, it’s not. In years past, IBM i has hosted AIX and Linux partitions. Using VIOS is the same concept, only instead of your underlying operating system being IBM i-based, it’s VIOS, which is AIX-based.

Great documentation has been created to help us understand how to implement IBM i on VIOS. Some is written more specifically for those running IBM i on blades, but it’s applicable whether you’re on a blade or another Power Systems server. Many shops already have AIX skills in house, but if you don’t, it can be very cost-effective to hire a consultant to do your VIOS installation. Many customers already bring in consultants when they upgrade or install new hardware, so setting up VIOS can be something to add to the checklist. You can also opt to have IBM manufacturing preinstall VIOS on Power blades or Power servers.

Answering the Whys

Why would you want to use VIOS to host your disk in the first place? VIOS can see more disk subsystems than IBM i can natively. As of Nov. 21, the IBM DS3400, DS4700, DS4800, DS8100 and DS8300 are all supported when running IBM i and VIOS, and I expect the number of supported disk subsystems to increase. You can also use a SAN Volume Controller (SVC) with VIOS, which lets you put many more storage subsystems behind it, including disk from IBM, EMC, Hitachi, Sun, HP, NetApp and more. This way you can leverage your existing storage-area network (SAN) environment and let IBM i connect to your SAN.

The question remains, why bother with VIOS in the first place? These open-system disk units use 512 bytes per sector, while traditional IBM i disk units use 520 bytes per sector. With VIOS, you’re presenting virtual SCSI disks (vtscsi devices) to your client LPARs that are 512 bytes per sector. IBM i’s virtual I/O driver can use 512 bytes per sector, while none of the current Fibre Channel, SAS or SCSI drivers for physical I/O adapters can (for now); IBM i storage management otherwise expects to see 520 bytes per sector. To get around that, IBM i uses an extra 512-byte sector for every 4 KB memory page. The actual physical disk I/O is handled by VIOS, which can talk 512 bytes per sector. This, in turn, widens the number of disk subsystems IBM i can use without forcing those subsystems to support 520 bytes per sector.
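The sector arithmetic above can be sketched in a few lines. This is a worked example, not IBM code; the split of a 520-byte sector into 512 data bytes plus 8 bytes of header is the commonly described layout and is an assumption here:

```python
import math

# Worked example of the 520- vs. 512-byte-per-sector math (illustrative only).
PAGE = 4096          # IBM i memory page size, bytes
SECTOR_IBMI = 520    # traditional IBM i sector (assumed: 512 data + 8 header bytes)
SECTOR_OPEN = 512    # open-system sector size presented by VIOS

# A 4 KB page spans 8 traditional sectors: 4096 bytes / 512 data bytes each.
sectors_520 = PAGE // (SECTOR_IBMI - 8)        # 8 sectors
bytes_on_disk = sectors_520 * SECTOR_IBMI      # 8 * 520 = 4160 bytes

# Those 4160 bytes don't fit in 8 open-system sectors (8 * 512 = 4096),
# so IBM i consumes one extra 512-byte sector per page:
sectors_512 = math.ceil(bytes_on_disk / SECTOR_OPEN)
print(sectors_512)   # 9 -> exactly one extra sector per 4 KB page
```

That extra sector per page is the roughly 12.5 percent capacity overhead to keep in mind when sizing 512-byte-per-sector LUNs for an IBM i client.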

But again, why bother? It’s certainly possible you don’t need to implement this in your environment. If things are running fine, this makes no sense for you. This solution is another tool in the toolbox, and another method you can use to talk to disk. As alternative solutions are discussed in your business, and people are weighing the pros and cons of each, it’s good to know VIOS is an option.

Do you currently have a SAN or are you looking at one? Are you thinking about consolidating storage for the other servers in your environment? Are you considering blade technology? Are you interested in running your Windows, VMware, Linux, AIX and IBM i servers in the same BladeCenter chassis? If you have an existing SAN, or you’re thinking of getting one, it may make sense to connect your IBM i server to it. If you’re thinking of running IBM i on a blade, then you most certainly have to look at a SAN solution. These are all important ideas to consider, and you may find significant savings when you implement these new technologies.

VIOS

When I was first learning about VIOS, a friend of mine said this was the command I needed to share disk in VIOS:

mkvdev -vdev hdisk1 -vadapter vhost1

If you’re used to descriptive IBM i command names, mkvdev (make virtual device) makes perfect sense. I find that to be true of many AIX commands. You give the command the disk name (in this case an hdisk known to the machine as hdisk1) and the adapter to connect it to. On the IBM i client partition, a disk will appear that’s available for use just like any other disk.

To take it from the beginning, you’d have already set up your server and client virtual adapters, and your SAN administrator would zone the disks to your VIOS physical Fibre adapters. You’d log into VIOS as padmin, and after you run cfgdev in VIOS to make your new disks available, you can run lspv (list physical volume) and see a list of disks attached to VIOS.

In my case I see:

lspv
NAME PVID VG STATUS
hdisk0 0000bb8a6b216a5d rootvg active
hdisk1 00004daa45e9f5d1 None
hdisk2 00004daa45ebbd54 None
hdisk3 00004daa45ffe3fd None
hdisk4 00004daa45ffe58b None
hdisk5 00004daae6192722 None

This might look like only one disk, hdisk0 in rootvg, is in use. However, if I run lsmap -vadapter vhost3 (lsmap could be thought of as list map, with the option asking it to show me the virtual adapter called vhost3), I’ll see:

SVSA             Physloc                      Client Partition ID
---------------  ---------------------------  -------------------
vhost3           U7998.61X.100BB8A-V1-C17     0x00000004

VTD              vtscsi3
Status           Available
LUN              0x8100000000000000
Backing device   hdisk5
Physloc          U78A5.001.WIH0A68-P1-C6-T2-W5005076801202FFF-L9000000000000

This tells me that hdisk5 is the backing device, and it’s mapped to vhost3, which in turn is mapped to client partition 4, which is the partition running IBM i on my machine.

To make this mapping, I needed to run the mkvdev command:

mkvdev -vdev hdisk5 -vadapter vhost3

If I needed to assign more disks to the partition, I could’ve run more mkvdev commands. At this point, I use the disks just as I would any other disks in IBM i.
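As a sketch of what assigning more disks looks like, the pattern simply repeats, one mkvdev per backing device. The disk names hdisk6 and hdisk7 and the VTD names are hypothetical, and these commands run only in the VIOS padmin restricted shell:

```
$ cfgdev                                          # rescan for newly zoned LUNs
$ mkvdev -vdev hdisk6 -vadapter vhost3 -dev vtscsi4
$ mkvdev -vdev hdisk7 -vadapter vhost3 -dev vtscsi5
$ lsmap -vadapter vhost3                          # verify the new backing devices
```

Each new vtscsi device then surfaces in the IBM i client partition as another unconfigured disk unit, ready to add to an auxiliary storage pool.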

It might look like gibberish if this is your first exposure to VIOS. Your first inclination may be to avoid learning about it. Don’t dismiss it too quickly. IBM i now has another option when you’re setting up disk subsystems. The more you know about how it works, the better you’ll be able to discuss it.

Although I may find myself more heavily involved with AIX and VIOS, I still look back fondly at my first true love, and I’m glad it’s still getting options added that position it well for the future.

References

www.ibm.com/systems/resources/systems_power_hardware_blades_i_on_blade_readme.pdf

www.ibm.com/systems/resources/systems_i_os_i_virtualization_and_ds4000_readme.pdf

www.redbooks.ibm.com/abstracts/sg246455.html

www.redbooks.ibm.com/abstracts/sg246388.html

www.ibm.com/systems/storage/software/virtualization/svc