Edit: The links still work.
Originally posted February 14, 2012 on AIXchange
Continuing from last week, here’s more on the recently released IBM Redpaper, “IBM PowerVM Getting Started Guide.”
Chapter 2: IVM
From the authors:
“IBM developed the Integrated Virtualization Manager (IVM) as a server management solution that performs a subset of the HMC and SDMC features for a single server, avoiding the need for a dedicated HMC or SDMC server. IVM manages a single stand-alone server — a second server managed by IVM has its own instance of IVM installed. With the subset of HMC and SDMC server functionality, IVM provides a solution that enables the administrator to quickly set up a server. IVM is integrated within the Virtual I/O Server product, which services I/O, memory, and processor virtualization in IBM Power Systems.
“There are many environments that need small partitioned systems, either for test reasons or for specific requirements, for which the HMC and SDMC solutions are not ideal. A sample situation is where there are small partitioned systems that cannot share a common HMC or SDMC because they are in multiple locations.
“IVM is a simplified hardware management solution that inherits most of the HMC features. It manages a single server, avoiding the need for an independent personal computer. It is designed to provide a solution that enables the administrator to reduce system setup time and to make hardware management easier, at a lower cost.
“When not using either the HMC or the SDMC, VIOS takes control of all the hardware resources. There is no need to create a specific partition for the VIOS. When VIOS is installed using the default settings, it installs on the server’s first internal disk controller and onto the first disk on that controller. IVM is part of VIOS and activated when VIOS is installed without an HMC or SDMC.”
Chapter 2 continues with details on IVM installation.
I wish this chapter included screen shots. (There are screen shots in chapters 3 and 4.) The Redpaper describes the steps, but for anyone unfamiliar with the interface the text alone could be confusing; a few screen shots would help.
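For anyone who has never touched IVM, a quick orientation (this is me, not the Redpaper): once VIOS is installed without an HMC or SDMC, IVM is reachable with a browser pointed at the VIOS IP address, and the padmin shell also gives you HMC-style commands. For example, to list the partitions IVM is managing:

lssyscfg -r lpar -F name,state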
Chapter 3: HMC
More from the authors:
“Note: There is flexibility for you to plan your own adapter numbering scheme. The Maximum virtual adapters setting needs to be set in the Virtual Adapters window to allow for your numbering scheme. The maximum setting is 65535 but the higher the setting, the more memory the managed system reserves to manage the adapters.”
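If you would rather set that from the HMC command line than from the Virtual Adapters window, the profile attribute is max_virtual_slots. Something along these lines should do it (the managed system, profile, and partition names here are made up), and the change takes effect the next time the partition is activated with that profile:

chsyscfg -r prof -m MY_SYSTEM -i "name=default_profile,lpar_name=vios1,max_virtual_slots=200"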
They cover the three VIOS installation methods: from DVD, from the HMC (using the installios command), and via Network Installation Manager (NIM). One of the notes says:
“Interface en5 is the SEA adapter created in 3 on page 29. Alternatively, an additional virtual adapter may be created for the VIOS remote connection, or another physical adapter may be used (it will need to be cabled) for the TCP/IP remote connection. TCP and UDP port 657 must be open between the HMC and the VIOS. This is a requirement for DLPAR (using RMC protocol).”
When I set up shared Ethernet adapters on VIO servers, I like to add an additional virtual Ethernet adapter to carry the VIOS IP address. This lets me perform maintenance on a VIOS and its SEA without an outage, since network traffic goes out the backup SEA on my other VIOS.
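For the actual IP assignment on the VIOS, whichever interface you put it on (the SEA's en device or a separate virtual adapter's interface), mktcpip is the usual route. A rough example with made-up addresses and interface name:

mktcpip -hostname vios1 -inetaddr 10.1.1.11 -interface en5 -netmask 255.255.255.0 -gateway 10.1.1.1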
Section 3.2 covers setting up dual VIO servers:
“The benefit of a dual VIOS setup is that it promotes Redundancy, Accessibility and Serviceability (RAS). It also offers load balancing capabilities for MPIO and for multi SEA configuration setups. The differences between a single and dual VIOS setup are:
* The additional VIOS partition
* The additional virtual Ethernet adapter used as the SEA Control Channel adapter per VIOS
* Setting the trunk priority on the virtual Ethernet adapters used for bridging to physical adapters in an SEA configuration.”
The authors explain how to move from a single VIO SEA to a dual VIO scenario by adding the control channel adapter using this command:
chdev -dev ent5 -attr ctl_chan=ent6 ha_mode=auto
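For comparison, if you were building the SEA on the second VIOS from scratch, you would typically create it with the failover attributes already in place. Something like this, with adapter names that are purely illustrative:

mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=auto ctl_chan=ent6

Here ent0 is the physical adapter, ent4 is the bridging virtual adapter (the one whose trunk priority differs between the two VIO servers), and ent6 is the control channel adapter.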
They also mention that we can run commands on the VIO command line or use cfgassist, which is similar to smitty in AIX.
Section 3.3 covers setting up virtual Fibre Channel. The authors recommend using virtual SCSI disks for rootvg and NPIV for data LUNs:
“Virtual Fibre Channel allows disks to be assigned directly to the client partitions from the SAN storage system. With virtual SCSI, the disks are assigned to the VIOS partition before they are mapped to a virtual SCSI adapter.
“The preference is to still use virtual SCSI for client partition operating system disk, and use virtual Fibre Channel for the data. The reasons for using virtual SCSI for client partition operating system disks are:
* When the disks are assigned to the VIOS first, they can be checked before having them mapped to a client. With virtual Fibre Channel this cannot be determined until the client partition is loaded from an installation source.
* Operating systems such as AIX and Linux have their kernels running in memory. If serious SAN issues are being experienced, the VIOS will first detect the problem and sever the link to the client partition. The client partition will halt abruptly, reducing any risk of data corruption. With operating systems using virtual Fibre Channel or physical Fibre Channel, the partition will remain running for a period. During that period the client partition is susceptible to data corruption.
* Operating system disks using virtual SCSI are not reliant on external device drivers whereas operating system disks using virtual Fibre Channel are. When it comes to upgrading the external device drivers, the client partitions would need to follow special procedures to upgrade.”
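On the VIOS side, the virtual Fibre Channel piece is a short exercise. Roughly (vfchost0 and fcs0 are placeholders for whatever your own system reports):

lsnports
vfcmap -vadapter vfchost0 -fcp fcs0
lsmap -all -npiv

lsnports shows which physical Fibre Channel ports are NPIV capable, vfcmap ties a virtual Fibre Channel server adapter to one of them, and lsmap -all -npiv lets you confirm the mapping.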
Chapter 4: SDMC
From the authors:
“The IBM Systems Director Management Console (SDMC) provides system administrators the ability to manage IBM Power System servers as well as IBM Power Blade servers. The SDMC organizes tasks in a single panel that simplifies views of systems and day-to-day tasks. The SDMC is also designed to be integrated into the administrative framework of IBM Systems Director.
“The SDMC can automatically handle the slot allocation of virtual adapters for the user. With the SDMC the user can choose to either let the SDMC manage the slot allocations, or use the traditional manual mechanism to allocate the virtual adapter IDs.”
According to section 4.1.4, setting up an SEA failover configuration is a simple GUI operation when using SDMC:
Select the primary VIOS and the physical adapter you want to use, then the backup VIOS and its physical adapter, and click OK:
“The SDMC automatically creates the SEA adapters on both VIOS1 and VIOS2. The SDMC will also configure the control channel as a part of this step. The virtual Ethernet adapter with the highest VLAN ID is used for the SEA control channel.”
This should remove the possibility of errors arising from setting up the control channels manually.
Although you can do the same with the HMC GUI, I still prefer to manage things on the command line.
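For example, once the SEA failover pair is in place, I can check which VIOS currently holds the primary role by running entstat against the SEA device (ent5 here is just an example name):

entstat -all ent5

and looking at the high availability section of the output, which should show whether that SEA is currently PRIMARY or BACKUP.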
The publication has much more. It’s well worth your time.