Edit: This is still a slick way to handle disk.
Originally posted May 29, 2012 on AIXchange
I wrote about shared storage pools (here and here) back in January. Recently, I had an opportunity to implement one with a customer.
We had two 720 servers, each of which had two VIO servers. We upgraded to the latest VIOS code, making sure our HMC and firmware were at current levels. Then we presented the LUNs from our SAN, following the steps outlined in my January posts.
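One quick sanity check before going further: running ioslevel on each VIO server confirms the code level you're actually on. (Shared storage pools want a reasonably current 2.2-level VIOS; check the release notes for the exact minimum your configuration needs.)
ioslevel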
First we made sure all the LUNs were set to no reserve.
chdev -dev hdiskX -attr reserve_policy=no_reserve
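As an example (assuming the repository and pool LUNs are hdisk2, hdisk3 and hdisk5, the same names used in the cluster create command below), you'd check the current setting and then change each disk, something like:
lsdev -dev hdisk2 -attr reserve_policy
chdev -dev hdisk2 -attr reserve_policy=no_reserve
chdev -dev hdisk3 -attr reserve_policy=no_reserve
chdev -dev hdisk5 -attr reserve_policy=no_reserve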
Then we created the cluster. The names shown here are the ones from Nigel Griffiths' presentation (see my first post linked above); for the record, we used our own names.
cluster -create -clustername galaxy -repopvs hdisk2 -spname atlantic -sppvs hdisk3 hdisk5 -hostname bluevios1.ibm.com
With that accomplished, I could see that we had a cluster.
cluster -list
cluster -status -clustername galaxy
In our cluster, we used one 20G repository LUN and two 500G LUNs for our data.
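If you want a quick look at the pool itself, lssp run against the cluster should list the pool along with its total and free space, a handy sanity check that the sizes match what you presented:
lssp -clustername galaxy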
The cluster -create command took a few minutes to run. On our first try we didn't have fully qualified hostnames on our VIO servers, so we got an error when we tried to create the cluster. Changing the names was easy enough, and after that, the cluster was successfully created.
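If you want to catch this before running cluster -create, it's worth displaying the hostname and host table entries from the padmin shell first (assuming your VIOS level includes the hostname and hostmap commands):
hostname
hostmap -ls
The hostname should come back fully qualified, e.g. bluevios1.ibm.com rather than just bluevios1, before you try to create the cluster.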
We ran the cluster -list and cluster -status commands again and got the output we expected. Then from the same node we ran the cluster -addnode command to add a second VIOS to our cluster.
cluster -addnode -clustername galaxy -hostname redvios1.ibm.com
It took about a minute to add that node, and it was successful. We ran cluster -status again to confirm that the second VIOS was added.
One thing I liked about the process is that the output provides the node name, machine type and model information. This way it's easy to determine which physical server is running the command.
We did the same procedure for the next two VIO servers. This took a bit longer, likely because they were on another physical server. Still, at the end of the procedure the cluster -status command displayed all four VIO servers in the cluster. When we logged into each of the other VIO servers and ran cluster -status, we saw the same output.
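The add commands for those two nodes follow the same pattern; with hypothetical hostnames for the second pair of VIO servers, it would look something like this, finishing with another status check:
cluster -addnode -clustername galaxy -hostname greenvios1.ibm.com
cluster -addnode -clustername galaxy -hostname greenvios2.ibm.com
cluster -status -clustername galaxy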
(Note: Running lspv won't tell you that the disks in your storage pool are in use, but the lspv -free command will give you this confirmation, since disks that belong to the pool stop showing up as free. This could be an issue if you were mapping the entire hdisk to a client LPAR — i.e., the "old" way. But because you're not actually mapping hdisks directly, this isn't necessarily a problem.)
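A quick way to see the difference is to run both listings and compare; hdisks that belong to the pool appear in the first but drop out of the free list:
lspv
lspv -free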
To create a new vdisk to map to our client LPAR, we ran:
mkbdsp -clustername galaxy -sp atlantic 16G -bd vdisk_red6a -vadapter vhost2
Once we had our disk created and mapped, we ran:
lssp -clustername galaxy -sp atlantic -bd
That showed us that vdisk_red6a was in the cluster.
Then we ran this command to map it on vios2:
mkbdsp -clustername galaxy -sp atlantic -bd vdisk_red6a -vadapter vhost2
If you compare the command that creates the vdisk to the one that maps the vdisk to the client LPAR, the only difference is the size you provide. Someone can tell me if there’s an easier way to do it. For my own amusement I tried using the old mkvdev command. It didn’t work.
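Laid out side by side (these are the two commands from above), the pattern is: give a size on the VIO server where the backing device is created, and leave the size off on the other VIO server, where you're only mapping the existing backing device:
mkbdsp -clustername galaxy -sp atlantic 16G -bd vdisk_red6a -vadapter vhost2
mkbdsp -clustername galaxy -sp atlantic -bd vdisk_red6a -vadapter vhost2
Keep in mind that vhost naming is per-VIOS, so the adapter you pass to -vadapter on the second VIO server is whichever vhost device serves that client there; it won't necessarily be vhost2 on both.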
When we ran lsmap -all, we could see the same vdisk presented to the client, going from both VIO servers.
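If you'd rather not scroll through everything, lsmap can also be pointed at a single adapter; vhost2 here is the adapter from the mkbdsp commands above:
lsmap -vadapter vhost2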
We then wanted to try live partition mobility using shared storage pools. This posed some problems, but searching on the error message we encountered (HSCLA24E) turned up this entry:
“This week we were trying to migrate some VIO hosted LPARs using XIV disk from one POWER7 system to another. The disk is hosted on a VIO server via the fabric, then using VSCSI devices to map up to the servers. Unfortunately the migration failed and the message we got was HSCLA24E: The migrating partition’s virtual SCSI adapter 2 cannot be hosted by the existing virtual I/O server (VIOS) partition on the destination managed system. To migrate the partition, set up the necessary VIOS hosts on the destination managed system, then try the operation again.
“So we did some searching and found the following:
“HSCLA24E error:
1) On the source VIOS partition, do not set the adapter as required and do not select 'any client partition can connect' when you create a virtual SCSI adapter (either can cause the error code).
2) The max transfer size of the hdisk in use may not differ between the source and destination VIOS.
3) The VIO servers may not have the reserve_policy set to single_path; no_reserve is required.
4) The destination VIO servers are not able to see the disks the client needs.
5) The same VTD (virtual target device) names may not already exist on the destination system."
In our case we addressed no. 1 by unselecting the “any client can connect” option and mapping to the specific client we were using. With these changes, we could successfully migrate the LPAR.
In the course of changing the adapters, we rebooted the VIO servers. Be patient when rebooting. It seems to take some time for the servers to restart and join the cluster. You'll know it's ready when the cluster -status command changes from "state down" to "state OK." (We joked that you only have to give it until "despair + 1.")
Also, be sure to run df and check your /var/vio/SSP/'clustername' filesystem that gets created on all the members of your cluster. That was a quick and dirty way for us to determine that our status was about to change to OK. As the cluster comes online, and as you run cluster -status, you'll see the filesystems mount and the status change from down to OK.
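As a rough sketch of that check (assuming pipes and grep are available in your padmin shell, which they normally are), after rebooting a node you can keep an eye on both:
cluster -status -clustername galaxy
df | grep galaxy
Once the /var/vio/SSP/galaxy filesystem shows as mounted, the node's state in cluster -status should flip from down to OK shortly afterward.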
This initial build-out of shared storage pools offers some advantages. For starters, there are fewer, larger LUNs to present and zone to the VIO servers. With larger LUNs being carved up in the pool, there are fewer individual hdisks to set to no_reserve and map with mkvdev commands. Of course, some would argue that this advantage is offset by the need to run mkbdsp commands on both VIO servers.
It's also nice being able to log in to one cluster node, create a vdisk and see that new vdisk show up on all four nodes, rather than having to log in to each VIO server separately. This just feels like a cleaner disk management solution.
As I continue to work with shared storage pools, I’m sure I’ll have more lessons to pass along. If you’ve been using this technology, please share your thoughts in Comments.