Edit: I still love Redbooks. Part 2.
Originally posted August 21, 2012 on AIXchange
As I said last week, the “IBM PowerVM Best Practices” Redbook has a lot of valuable information. This week I’ll cover the final three chapters of this publication.
Chapter 5 notes
Chapter 5 covers storage, including virtual SCSI and virtual Fibre Channel. The authors also address the question of whether to boot from internal or external disk:
“The best practice for booting a [VIO server] is using internal disks rather than external SAN storage. Below is a list of reasons for booting from internal disks:
* The [VIOS] does not require specific multipathing software to support the internal booting disks. This helps when performing maintenance, migration, and update tasks.
* The [VIOS] does not have to share Fibre Channel adapters with virtual I/O clients, which helps in the event a Fibre Channel adapter replacement is required.
* If virtual I/O clients have issues with virtual SCSI disks presented by the [VIOS] backed by SAN storage, the troubleshooting can be performed from the [VIOS].”
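If you want to double-check where a given VIOS actually boots from, the padmin shell can tell you. A quick sketch (hdisk0 is just a placeholder for whatever disk lsvg reports):

$ lsvg -pv rootvg
$ lsdev -dev hdisk0 -vpd

The first command lists the physical volumes behind rootvg, and the second shows the location code and VPD data so you can tell an internal SAS drive from a SAN LUN.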
Virtual SCSI and NPIV can be mixed within the same virtual I/O client. Booting devices or rootvg can be mapped via virtual SCSI adapters; data volumes can be mapped via NPIV (section 5.1.3). The pros and cons of mixing NPIV and virtual SCSI are illustrated in table 5-1.
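On the VIOS, that mix looks something like the following. The adapter and device names here are made up for illustration, so substitute your own:

$ mkvdev -vdev hdisk4 -vadapter vhost0 -dev lpar1_rootvg
$ vfcmap -vadapter vfchost0 -fcp fcs0
$ lsmap -vadapter vhost0
$ lsmap -npiv -vadapter vfchost0

The mkvdev maps a LUN to the client as a virtual SCSI boot disk, while vfcmap ties the client's virtual Fibre Channel adapter to an NPIV-capable physical port so the data LUNs can be zoned directly to the client. The two lsmap commands confirm the mappings.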
A chdev should be run on all Fibre Channel (fscsi) devices (section 5.1.4):
$ chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes
fscsi0 changed
“Changing the fc_err_recov attribute to fast_fail will fail any I/Os immediately if the adapter detects a link event, such as a lost link between a storage device and a switch. The fast_fail setting is only recommended for dual [VIOS] configurations. Setting the dyntrk attribute to yes allows the [VIOS] to tolerate cable changes in the SAN.”
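One practical note: if the fscsi device is busy, which it usually is on a running VIOS, chdev will refuse to make the change on the fly. Adding the -perm flag stages the new values in the ODM so they take effect at the next reboot, and lsdev lets you verify what is currently active:

$ chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
$ lsdev -dev fscsi0 -attr fc_err_recov
$ lsdev -dev fscsi0 -attr dyntrk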
The authors recommend exporting disk devices backed by SAN storage as physical volumes. In environments with a limited number of disks, storage pools should be created to manage storage from the VIOS (section 5.2.1).
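For reference, mapping a whole SAN disk as a physical volume versus carving a backing device out of a storage pool looks roughly like this on the VIOS. The pool, disk, and device names are placeholders:

$ mkvdev -vdev hdisk5 -vadapter vhost1 -dev lpar2_datavg
$ mksp -f clientsp hdisk6 hdisk7
$ mkbdsp -sp clientsp 20G -bd lpar3_data -vadapter vhost2

The first command hands hdisk5 to the client as-is; the last two build a storage pool from two disks and carve a 20 GB backing device out of it for another client.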
Virtual adapter considerations and naming conventions are covered in section 5.2.2. The pros and cons of using logical volumes for disk mappings versus mapping entire disks are considered in section 5.2.3. This section also tells us:
“Virtual tape devices are assigned and operated similarly to virtual optical devices. Only one virtual I/O client can have access at a time. It is a best practice to have such devices attached to a [VIOS], instead of moving the physical parent adapter to a single client partition.
“When internal tapes and optical devices are physically located on the same controller as the [VIO server’s] boot disks, it is a best practice to map them to a virtual host adapter. Then, use dynamic logical partitioning to assign this virtual host adapter to a client partition.”
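In practice those mappings are one-liners from the padmin shell. A hedged example, with device names that will differ on your system (the loadopt assumes you have already built a media repository with mkrep and added the image with mkvopt):

$ mkvdev -vdev cd0 -vadapter vhost0
$ mkvdev -vdev rmt0 -vadapter vhost0
$ mkvdev -fbo -vadapter vhost0
$ loadopt -vtd vtopt0 -disk aix_install_image

The first two map a physical optical and tape drive; the third creates a file-backed virtual optical device, and loadopt mounts a repository image into it. Remember that only one client can use a given device at a time, so check lsmap -all before moving it.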
Section 5.2.4 covers configuring the virtual I/O client with virtual SCSI and lists some recommended AIX tuning options (an example follows the NPIV excerpt below). Sections 5.3 and 5.4 cover shared storage pools and NPIV, respectively:
“NPIV is now the preferred method of providing virtual storage to virtual I/O clients whenever a SAN infrastructure is available. The main advantage for selecting NPIV, compared to virtual SCSI, is that the [VIOS] is only used as a pass-through to the virtual I/O client virtual Fibre Channel adapters. Therefore, the storage is mapped directly to the virtual I/O client, with storage allocation managed in the SAN. This simplifies storage mapping at the [VIOS].”
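To tie sections 5.2.4 and 5.4 together, here's roughly what each side looks like. On the VIOS, verify that you have NPIV-capable ports and map the client's virtual Fibre Channel adapter (names are examples):

$ lsnports
$ vfcmap -vadapter vfchost2 -fcp fcs1
$ lsmap -all -npiv

On an AIX client using virtual SCSI, the usual tuning knobs are the vscsi adapter and hdisk attributes. Treat these values as examples rather than gospel, and follow your storage vendor's MPIO recommendations. Run these as root on the client:

# chdev -l vscsi0 -a vscsi_err_recov=fast_fail -a vscsi_path_to=30 -P
# chdev -l hdisk0 -a hcheck_interval=60 -a queue_depth=20 -P

The -P flag stages the change in the ODM so it takes effect after the next reboot.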
Chapters 6 and 7 notes
Chapter 6 covers performance monitoring, highlighting the tools and commands available for both short- and long-term monitoring.
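If you're looking for starting points from the padmin shell, viostat and topas are built in, and newer VIOS levels ship the performance advisor. For example (the part command needs a reasonably current VIOS level, 2.2.2.0 or later as I recall):

$ viostat 5 3
$ topas -cecdisp
$ part -i 30

viostat gives you disk and adapter I/O rates, topas -cecdisp shows cross-partition utilization for the whole frame, and part collects a 30-minute sample and writes out an advisor report you can open in a browser.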
Chapter 7 covers security and advanced PowerVM features, including the ports that are open on the VIOS by default for services such as FTP, SSH, Telnet, rpcbind, and RMC. The authors recommend disabling FTP and Telnet if they’re not needed (section 7.1.2). Active Memory Sharing and Active Memory Deduplication are covered in sections 7.4 and 7.4.3.
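Turning those services off on the VIOS is quick. Make sure SSH is working before you do, so you don't lock yourself out:

$ stopnetsvc telnet
$ stopnetsvc ftp
$ viosecure -firewall view

stopnetsvc disables the service, and viosecure -firewall view shows what the VIOS firewall currently allows (viosecure -firewall on enables it).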
PowerSC and Live Partition Mobility are covered in sections 7.2 and 7.3. LPM storage considerations are listed in section 7.3.3:
“* When configuring virtual SCSI, the storage must be zoned to both source and target [VIO servers]. Also, only SAN disks are supported in LPM.
* When using NPIV, confirm that both WWPNs on the virtual Fibre Channel adapters are zoned.
* Dedicated I/O adapters must be deallocated before migration. Optical devices in the [VIOS] must not be assigned to the virtual I/O clients that will be moved.
* When using virtual SCSI adapters, verify that the reserve attributes on the physical volumes are the same for the source and destination [VIO servers].
* When using virtual SCSI, before you move a virtual I/O client, you can specify a new name for the virtual target device (VTD) if you want to preserve the same naming convention on the target frame. After you move the virtual I/O client, the VTD assumes the new name on the target [VIOS]. …”
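The reserve attribute check in that fourth bullet is the one that most often trips people up. On each VIOS serving the disk (the hdisk number is just an example, and some multipath drivers use a differently named attribute, so check your driver's documentation):

$ lsdev -dev hdisk4 -attr reserve_policy
$ chdev -dev hdisk4 -attr reserve_policy=no_reserve

Add -perm to the chdev if the disk is already mapped and in use; the change then takes effect after a reboot.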
Section 7.3.4 lists LPM network considerations:
“* Shared Ethernet Adapters (SEA) must be used in a Live Partition Mobility environment.
* Source and target frames must be on the same subnet to bridge the same Ethernet network that the mobile partitions use.
* The network throughput is important. The higher the throughput, the less time it will take to perform the LPM operation. For example, if we are performing an LPM operation on a virtual I/O client with 8 GB of memory:
– A 100 Mb network, sustaining a 30 Mb/s throughput, takes 36 minutes to complete the LPM operation.
– A 1 Gb network, sustaining a 300 Mb/s throughput, takes 3.6 minutes to complete the LPM operation.”