More Fun with AIX on a Nutanix Cluster

Edit: The cluster was fun to play with

Originally posted November 13, 2018 on AIXchange

I recently had another hands-on experience with a Nutanix Cluster.

This system consisted of four CS821 nodes. After previously doing an install with the cloud-ready deployment method, I wanted to try an .iso installation as well as installing from NIM. Those are the big three when it comes to installing AIX on hyperconverged systems.

The first step is to create a VM. Nutanix has an image library that’s much like the virtual media repository on a VIO server in PowerVM. Populating this library with IBM-provided AIX .iso files turned out to be as simple as this:

  • I logged into Prism, opened “image configuration” and selected “upload image.”
  • I named the image (AIX_7200-03-01-1838_flash.iso was the latest available as of this writing) and changed the image type to ISO.
  • Then I chose a storage container for the image and provided the image source.

That last one is a nice touch, by the way. Rather than downloading the file to your machine, uploading it to the cluster, and using that as your source, you can provide a URL and Nutanix will download the file directly from the source for you. I selected the correct .iso image from the IBM Entitled Systems Support (ESS) site, and rather than using Download Director, I selected the “click here to use http” option. This provided a link from IBM’s site to the .iso image that I could feed to Nutanix.
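
If you’d rather script the upload than click through Prism, the image service can also be driven with acli from a controller VM. This is only a sketch: the image name, container name and URL below are placeholders, and option spellings may vary by AOS release.

# From a CVM: create an ISO image in the image service straight from a URL
acli image.create AIX_7200-03-01-1838_flash \
    source_url=http://example.com/path/to/AIX_7200-03-01-1838_flash.iso \
    container=default-container image_type=kIsoImage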

With my image on the server, I was ready to boot from it. At last check, these files were available from ESS:

  • ISO, AIX v7.2 Install DVD 1 (TL 7200-03-00-1837 9/2018)
  • ISO, AIX V7.2 Install DVD 2 (TL 7200-03-00-1837 9/2018)
  • ISO, AIX v7.2 Install flash (TL 7200-03-01-1838 9/2018)
  • GZ, AIX v7.2 Cloudrdy Virtual Mach Image TL 7200-03-01-1838, (9/2018)

Since DVD 1 is a space-saving .ZIP file, I initially downloaded that. It turns out, though, that the system can’t process .ZIP files, so I went with the install flash .iso image instead. Of course, I could have downloaded DVD 1 to my workstation, unzipped it there, and then uploaded it, but that would be self-defeating. The idea is to download directly from IBM.

To continue testing, I created a test virtual machine and assigned it CPU and memory. When I got down to the disks, I selected the virtual CD, told it I wanted to clone from the image service, pointed it at my AIX v7.2 install flash .iso image, and clicked update. I added an additional virtual disk to serve as hdisk0 in AIX, added a virtual NIC, and saved the profile.
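
For the record, the same VM could be built from the command line with acli instead of the Prism UI. Here’s a rough sketch, with made-up VM, image, container and network names, and option spellings that may vary by AOS release:

# Create the VM shell (2 vCPUs and 4 GB of memory as example values)
acli vm.create aixtest num_vcpus=2 memory=4G
# Attach a virtual CD-ROM cloned from the image service
acli vm.disk_create aixtest cdrom=true clone_from_image=AIX_7200-03-01-1838_flash
# Add an empty virtual disk that will become hdisk0 in AIX
acli vm.disk_create aixtest create_size=50G container=default-container
# Add a virtual NIC and power the VM on
acli vm.nic_create aixtest network=vm-network
acli vm.on aixtest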

At this point I powered on my VM and had two console options: VNC and COM1. The VNC console allows you to interact with OpenFirmware; COM1 is a traditional serial console.

One thing I’ve yet to figure out is how to display LED codes in the VM table display in Prism. But that just gives me more to look forward to as I continue working with these clusters.

Anyway, my VNC console showed that the VM had booted, while my COM1 console was blank. I entered 1 to define the system console, and it began displaying LED codes. I soon got to the familiar screen where I was prompted to press 1 to install in English.

Next was the normal Base Operating System Installation and Maintenance screen, where I could press 1 (start install now with default settings) or 2 (change/show installation settings and install). I entered 2, and wouldn’t you know it, the installer couldn’t detect the Nutanix disk I’d assigned for the OS install.

Luckily, support was aware of this issue and had a procedure ready. I needed to go back to the previous Welcome to Base Operating System Installation and Maintenance screen and follow these instructions:

3 Start Maintenance Mode for System Recovery
3 Access Advanced Maintenance Functions
>>> 0 Enter the Limited Function Maintenance Shell
$ cfgmgr (errors are expected – many devices are not yet available to be configured)
$ exit
99 (Return to previous menu)
5 Select Storage Adapters
>>> 1 scsi0      qemu_vhost-user-scsi-pci:0000:00:02.0
2 Change/Show Installation Settings and Install
1 Disk(s) where you want to install ……
1 hdisk0    qemu_vhost-user-scsi-pci:0000:00:02.0-LW_0
>>> 0  Continue with choices indicated above

After doing this, the disk I’d assigned to the VM appeared, and I was able to install AIX on it as expected. Interestingly, I got LED codes on my console during the install, but otherwise everything looked the same as any other AIX install from .iso.
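
Once the system was up, the Nutanix disk behaved like any other AIX disk. Just to illustrate, the usual commands confirm that the qemu vhost-user SCSI device is configured:

# List configured disks and show the vital product data for the boot disk
lsdev -Cc disk
lscfg -vl hdisk0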

Once I got AIX installed, I set it up as a NIM server, since I also wanted to test network boot. This too went as expected. The main difference is in how the client is booted from the NIM server. I followed these directions, and after I’d configured my NIM server and created a VM to boot from it, I powered the VM on and opened a VNC console. As found in the instructions, here’s the necessary syntax:

To boot the client from the network install manager (NIM) master at the OpenFirmware prompt, use the following command template:
0> boot <NIC-device>:<nim-server-ip>,<\path\to\client\bootfile>,<clientip>,<gateway-ip>

Further on in the document, there’s an example:

The following commands boot the client VM from the network install manager (NIM) master at the OpenFirmware prompt:
0> boot net:9.3.94.78,\tftpboot\client-vm.ibm.com,9.3.94.217,9.3.94.1

This worked as expected, and I was able to boot over the network. Unless you have a flat network, I recommend having your NIM server on the Nutanix cluster you’re booting from. As the document states:

“If you are using a static IP address for the client virtual machine (VM), the client and server must be on the same subnet when booting a VM across the network. You cannot specify a subnet mask in the boot command as shown in the following example.”
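
The NIM master side of this was otherwise ordinary AIX administration, with nothing Nutanix-specific about it. As a rough sketch (the resource and host names here are made up, and I’m assuming an existing lpp_source and SPOT), defining the client and enabling a BOS install would look something like this:

# Define the client VM as a standalone NIM machine
nim -o define -t standalone -a platform=chrp -a netboot_kernel=64 \
    -a if1="find_net client-vm 0" client-vm
# Allocate resources and enable an rte install for the client
nim -o bos_inst -a source=rte -a lpp_source=lpp_7200-03 \
    -a spot=spot_7200-03 -a accept_licenses=yes client-vm

From there, the client is booted by hand with the OpenFirmware boot command shown above, rather than by a bootp broadcast.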

I took a mksysb to my NIM server and installed a different VM from the mksysb image. Again, everything worked exactly as expected.
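
For anyone who hasn’t done it in a while, here’s a sketch of that mksysb round trip, again with made-up names and paths:

# On the source VM: back up rootvg to a file (here, an NFS mount from the NIM master)
mksysb -i /mnt/images/aixtest.mksysb
# On the NIM master: define the backup as a mksysb resource
nim -o define -t mksysb -a server=master \
    -a location=/export/images/aixtest.mksysb aixtest_mksysb
# Enable a BOS install of a different client from that image
nim -o bos_inst -a source=mksysb -a mksysb=aixtest_mksysb \
    -a spot=spot_7200-03 -a accept_licenses=yes client-vm2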

One small annoyance was that the COM1 consoles wouldn’t survive power off/power on of the virtual machine, although you could probably get around that by logging into a controller VM and opening a console that way.

As I learn more I’ll be sure to share it. Feel free to tell me about any Nutanix cluster specifics you’d like to read about.