Edit: I always love learning about new things
Originally posted November 24, 2020 on AIXchange
Learn about the modernization of N_Port ID Virtualization by enabling NPIV Multiple-Queues.
On Twitter, Chris Gibson cited this IBM Knowledge Center document on NPIV multiple queue support.
Learn about the modernization of N_Port ID Virtualization (NPIV) by enabling multiple-queues, which is commonly known as NPIV Multiple-Queue (MQ).
Currently, high-bandwidth Fibre Channel (FC) adapters, such as 16 Gb or 32 Gb FC adapters, support multiple queue pairs for storage I/O communication. Multiple queue pairs in the physical FC stack significantly improve input/output requests per second (IOPS) because I/Os can be driven in parallel through the FC adapter. The objective of NPIV Multiple-Queue is to add similar multiple-queue support to all components, such as the client operating system (OS), the POWER® Hypervisor (PHYP), and the Virtual I/O Server (VIOS). The NPIV VIOS stack and the PHYP are updated to allow client LPARs to access multiple queues. The Multiple-Queue feature is supported only on AIX® client logical partitions and on VIOS version 3.1.2 or later.
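If you want a quick sanity check of whether your environment is ready for this, a minimal sketch from the VIOS padmin shell looks like the following (output varies by adapter and VIOS level, so treat it as a starting point rather than a definitive procedure):

```sh
$ ioslevel     # confirm the VIOS is at version 3.1.2 or later
$ lsnports     # list the physical FC ports and whether they are NPIV (fabric) capable
```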
NPIV scaling improvements through Multiple-Queue provide the following benefits:
- Efficient utilization of available Multiple-Queue FC adapter bandwidth when the adapter is mapped to a single LPAR or to multiple LPARs.
- The ability to drive I/O traffic for multiple logical units (LUNs) in parallel through the FC adapter queues.
- Improved storage I/O performance due to increased IOPS.
Toward the end of the doc, there’s information about client tunable attributes:
The number of queues that the NPIV client uses depends on several factors, such as the FC adapter, firmware level, VIOS level, and the tunable attributes of the VFC host adapter. During the initial configuration, the VFC client negotiates the number of queues with the VFC host and configures the minimum of the num_io_queues attribute value and the number of queues reported by the VFC host.
After the initial configuration, the negotiated number is the maximum number of channels that the VFC client can enable. If the VFC host renegotiates more channels after operations (such as remap, VIOS restart, and so on), the number of channels remains the same as the initially negotiated number. However, if the VFC host renegotiates with fewer channels, the VFC client reduces its configured channels to this new lower number.
For example, if the initially negotiated number of channels between the VFC client and the VFC host is 8, and the VFC host later renegotiates the number of channels as 16, the VFC client continues to run with 8 channels. If the VFC host renegotiates the number of channels as 4, the VFC client reduces its number of configured channels to 4. However, if the VFC host then renegotiates the number of channels as 8, which would increase the number of configured channels back to 8, the VFC client must be reconfigured to renegotiate the number of channels from the client side.
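Just as a hedged illustration of where that client-side tunable lives (num_io_queues is the attribute named in the doc; fcs0 is a placeholder for whichever virtual FC client adapter you're looking at), on the AIX client you could run:

```sh
# Show the queue tunable currently set on the virtual FC client adapter
lsattr -El fcs0 -a num_io_queues

# Request a different value; the effective number of queues is still the minimum of
# this value and the number of queues reported by the VFC host. The -P flag defers
# the change until the adapter is reconfigured (for example, at the next reboot).
chdev -l fcs0 -a num_io_queues=8 -P
```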
This doc has much more information than I’ve included here, so be sure to read the whole thing. Also take a moment to view Chris’s actual tweet, which includes an image showing output from the lsdev -dev viosnpiv0 -attr command; viosnpiv0 is a new pseudo device you’ll find on your VIO server.
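For reference, here’s that VIOS-side command as you’d run it from the padmin shell. I won’t reproduce the attribute list here, since the names and defaults depend on your VIOS level, so check Chris’s screenshot or your own system:

```sh
$ lsdev -dev viosnpiv0 -attr    # list the VIOS-level NPIV Multiple-Queue tunables
```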