Document Examines IBM i External Storage Options

Edit: In this piece I address a future reader, who would now be a past reader.

Originally posted February 11, 2014 on AIXchange

As I’ve previously noted, while this blog is focused on AIX, I think it’s worthwhile to occasionally discuss the IBM i operating system. Increasingly, AIX pros are asked to support their IBM i counterparts as they connect to external storage for the first time.

For many years IBM i systems only used internal disks. And I expect that some IBM i environments will continue to rely exclusively on internal disk for years to come. After all, managing your machine is easy when you’re in total control of the environment from disks to server. However, things have changed. These days IBM i is commonly used in shared environments (like SANs), and of course this is where adding disks becomes tricky.

While the VIO server is common to both AIX and IBM i, those of us on the AIX side have been using it for years. In contrast, many IBM i pros have little to no experience with VIOS, and thus find it difficult to pick up. If you’re an IBM i administrator in this situation, you may find this document helpful.

“Hints and Tips: V7000 in an IBM i Environment” examines external storage options. The authors are Alison Pate, IBM Advanced Technical Sales Support, and Jana Jamsek, IBM Advanced Technical Skills, Europe. The document was most recently revised in August 2013. (I like to note the date because readers will often find years-old posts on this blog that reference documentation that’s likely been updated over time. So if you see this in, say, 2017, first, glad you’re here, Future Reader, and second, be sure you track down the latest version of Alison and Jana’s work.)

For instance, this section lays out the challenges of attaching IBM i to SAN disks without VIOS:

            Translation from 520 byte blocks to 512 byte blocks

            “IBM i disks have a block size of 520 bytes. Most fixed block (FB) storage devices are formatted with a block size of 512 bytes so a translation or mapping is required to attach these to IBM i. (The DS8000 supports IBM i with a native disk format of 520 bytes).

            “IBM i performs the following change of the data layout to support 512 byte blocks (sectors) in external storage: for every page (8 * 520-byte sectors) it uses an additional 9th sector; it stores the 8-byte headers of the 520-byte sectors in the 9th sector, and therefore changes the previous 8 * 520-byte blocks to 9 * 512-byte blocks. The data that was previously stored in 8 * sectors is now spread across 9 * sectors, so the required disk capacity on V7000 is 9/8 of the IBM i usable capacity. Vice versa, the usable capacity in IBM i is 8/9 of the allocated capacity in V7000.

            “Therefore, when attaching a Storwize V7000 to IBM i, whether through vSCSI, NPIV or native attachment this mapping of 520:512 byte blocks means that you will have a capacity ‘overhead’ of being able to use only 8/9ths of the effective capacity.

            “The impact of this translation to IBM i disk performance is negligible.”
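The 9/8 relationship is easy to sanity-check with a little arithmetic. Here's a minimal sketch (my own, not from Pate and Jamsek's document) that converts between IBM i usable capacity and the capacity you'd need to allocate on the V7000:

```python
# Each IBM i page of 8 x 520-byte sectors is stored as 9 x 512-byte blocks:
# the 8 x 8-byte sector headers (64 bytes) are tucked into the extra 9th sector.
PAGE_SECTORS = 8
FB_SECTORS_PER_PAGE = 9

def v7000_allocated_gb(ibmi_usable_gb):
    """V7000 capacity to allocate for a given IBM i usable capacity (9/8)."""
    return ibmi_usable_gb * FB_SECTORS_PER_PAGE / PAGE_SECTORS

def ibmi_usable_gb(v7000_allocated_gb):
    """IBM i usable capacity for a given V7000 allocation (8/9)."""
    return v7000_allocated_gb * PAGE_SECTORS / FB_SECTORS_PER_PAGE

print(v7000_allocated_gb(80))   # an 80 GB usable LUN consumes 90 GB on the V7000
print(ibmi_usable_gb(90))       # and a 90 GB allocation yields 80 GB usable
```

So roughly 11 percent of whatever you carve out on the V7000 is overhead, which is worth remembering when you're working out how much raw capacity to buy.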

The document also identifies the requirements for and potential issues with using vSCSI or NPIV. One section looks at sizing for performance and the need to consider I/O as well as capacity. The authors recommend getting a Disk Magic model to determine what’s best for your environment. They suggest starting with 80G LUN sizes, noting, “the recommendation is to create a dedicated storage pool for IBM i with enough managed disks backed by a sufficient number of spindles to handle the expected IBM i workload. Modeling with Disk Magic using actual customer performance data should be performed to size the storage system properly.”
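To put the 80G starting point together with the 8/9 usable factor, here's a back-of-the-envelope LUN count calculation. This is my own illustration, not a formula from the document, and it only covers capacity; as the authors stress, I/O performance should be modeled with Disk Magic, not guessed at:

```python
import math

def luns_needed(ibmi_usable_gb, lun_size_gb=80):
    """Rough count of LUNs for a target IBM i usable capacity.

    Allocated V7000 capacity is 9/8 of IBM i usable capacity, since each
    8 x 520-byte page is stored as 9 x 512-byte blocks.
    """
    allocated_gb = ibmi_usable_gb * 9 / 8
    return math.ceil(allocated_gb / lun_size_gb)

print(luns_needed(1000))  # 1 TB usable -> 1125 GB allocated -> 15 x 80 GB LUNs
```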

IBM Multipath is another topic of discussion:

            “With using the recommended switch zoning we achieve that four paths are established from a LUN to the IBM i: two of the paths go through adapter 1 (in NPIV also through VIOS 1) and two of the paths go through adapter 2 (in NPIV also through VIOS 2); from the two paths that go through each adapter one goes through the preferred node, and one goes through the non-preferred node. Therefore two of the four paths are active, each of them going through a different adapter, and a different VIOS if NPIV is used; two of the paths are passive, each of them going through a different adapter, and a different VIOS if NPIV is used. IBM i Multipathing uses a Round Robin algorithm to balance the IO among the paths that are active.”
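That path layout is easier to picture laid out explicitly. Here's a quick sketch of the four paths and the round-robin over the active ones (my own illustration of the description above, not code from the document; the names are hypothetical):

```python
from itertools import product

# Two adapters (in NPIV, each sits behind its own VIOS) x two V7000 nodes.
adapters = ["adapter1", "adapter2"]
nodes = ["preferred", "non-preferred"]

# A path is active only when it goes through the LUN's preferred node.
paths = [{"adapter": a, "node": n, "active": n == "preferred"}
         for a, n in product(adapters, nodes)]

active = [p for p in paths if p["active"]]

def next_path(io_number):
    """IBM i multipathing round-robins I/O across the active paths."""
    return active[io_number % len(active)]

print(len(paths), len(active))  # 4 paths total, 2 of them active
```

The point to notice is that the two active paths land on different adapters (and different VIOS partitions under NPIV), so losing one adapter or VIOS still leaves an active path.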

In addition, the document includes good graphics that further help explain the concepts being discussed.

There’s much more than I can cover here, so be sure to check it out. Though the document is IBM i specific, I believe this information is relevant for IBM i and AIX admins alike.