Edit: The link still works.
Originally posted February 7, 2012 on AIXchange
The just-published IBM Redpaper, “IBM PowerVM Getting Started Guide,” shows you how to use the Integrated Virtualization Manager (IVM), Hardware Management Console (HMC) and the Systems Director Management Console (SDMC) to configure your systems. It’s an extremely valuable guide that’s brief enough, at 104 pages, to be read quickly.
The chapters are independent, so they can be read in any order. I’ll run down some highlights in posts over the next two weeks:
Chapter 1
* There’s a great chart on page 2 that compares and contrasts the advantages and disadvantages of the IVM, HMC and SDMC.
* Section 1.2 covers planning:
“Be sure to check system firmware levels on your Power server and HMC or SDMC before you start. Decide if you will use Logical Volume Mirroring (LVM) in the AIX LPARs, or Multipath I/O (MPIO) at the VIOS level. Obviously, if you are running NPIV you would want to run MPIO at the AIX level. The examples in this paper use MPIO. Make sure your Fibre Channel switches and adapters are N_Port ID Virtualization (NPIV) capable if you will be using NPIV.
Make sure your network is properly configured.
Check the firewall rules on the HMC or SDMC.
Plan how much processor and memory you will assign to the VIOS for best performance.”
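As a quick sanity check during this planning step, something like the following can confirm NPIV capability on the VIOS and MPIO paths in an AIX client. (The device names are illustrative assumptions, not taken from the Redpaper; substitute your own.)

```shell
# On the VIOS (padmin shell): list physical FC ports that are NPIV capable.
# A nonzero "aports" value means the port still has virtual WWPN slots free.
lsnports

# On an AIX client LPAR: verify that MPIO sees multiple paths per disk.
# In a dual-VIOS setup each hdisk should show one Enabled path per VIOS.
lspath -l hdisk0
```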
* The authors recommend using a dual VIOS architecture — two VIO servers — to provide serviceability and scalability. So do I.
* Part of planning includes establishing a VIO slot number scheme. While the SDMC automates slot allocation, the authors illustrate their preferred scheme in Figure 1-2 on page 5.
The authors suggest a VIO slot numbering scheme where the server slots are 101, 102, 103 and so on in both VIO servers, and the client uses slots 11 and 12 to connect to VIO1, and slots 21 and 22 to connect to VIO2. When mapped, VIO1 would map 11 to 101 and 12 to 102, and VIO2 would map 21 to 101 and 22 to 102. I prefer a numbering scheme where my even-numbered adapters come from one VIOS (VIO1) and my odd-numbered adapters come from the other (VIO2), with both client and server using the same numbers. In my case I like 100, 110, 120 and 130 coming from VIO1, and 101, 111, 121 and 131 coming from VIO2. Of course, you may have your own numbering scheme, which I’d love to hear about in Comments.
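To make my even/odd scheme concrete, here’s a small shell sketch (the slot numbers are the illustrative ones from the discussion above) that prints the plan, with client and server sharing the same slot number:

```shell
# Print the even/odd dual-VIOS slot plan: VIO1 serves the even slots
# (100, 110, 120, 130) and VIO2 serves the odd slots (101, 111, 121, 131),
# with the client adapter using the same slot number as the server adapter.
print_slot_plan() {
  for i in 0 1 2 3; do
    base=$((100 + 10 * i))
    echo "VIO1 slot $base -> client slot $base"
    echo "VIO2 slot $((base + 1)) -> client slot $((base + 1))"
  done
}
print_slot_plan
```

The payoff of this symmetry is that when you’re troubleshooting, the slot number alone tells you which VIOS an adapter belongs to.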
* Section 1.3 covers the terminology differences between Power- and x86-based systems, which is handy for anyone with little or no background managing Power systems who needs to make the transition between the two vocabularies.
* Section 1.4 lists some prerequisites for setting up the machines:
“Check that:
- Your HMC or SDMC (the hardware or the virtual appliance) is configured, up, and running.
- Your HMC or SDMC is connected to the new server’s HMC port. We suggest either a private network or a direct cable connection.
- The TCP port 657 is open between the HMC/SDMC and the Virtual Server in order to enable Dynamic Logical Partition functionality.
- You have IP addresses properly assigned for the HMC and SDMC.
- The Power Server is ready to power on.
- All your equipment is connected to 802.3ad-capable network switches with link aggregation enabled. Refer to Chapter 5, Advanced Configuration, on page 75 for more details.
- Fibre Channel fabrics are redundant. Refer to Chapter 5, Advanced Configuration, on page 75 for more details.
- Ethernet network switches are redundant.
- SAN storage for virtual servers (logical partitions) is ready to be provisioned.”
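The port 657 prerequisite is worth verifying up front, since dynamic LPAR operations quietly fail without a working RMC connection. A quick check from an AIX client might look like this (the HMC hostname is a placeholder, and `nc` option behavior varies slightly between implementations):

```shell
# Confirm the RSCT resource manager daemons are active on the AIX LPAR;
# DLPAR operations depend on the RMC subsystem they provide.
lssrc -g rsct_rm

# Probe TCP connectivity to the HMC on port 657. Replace "hmc1" with your
# HMC or SDMC hostname; -z only tests the port, -w sets a 5-second timeout.
nc -z -w 5 hmc1 657 && echo "port 657 reachable"
```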
Chapter 5
The next three chapters are devoted to the specific approaches you might choose to take. Chapter 2 covers the IVM, Chapter 3 the HMC and Chapter 4 the SDMC. I’ll dissect those options next week. For now I’ll briefly discuss Chapter 5 (Advanced Configuration):
“This chapter describes additional configurations to a dual Virtual I/O Server (VIOS) setup and highlights other advanced configuration practices. The advanced setup addresses performance concerns over the single and dual VIOS setup.
This chapter includes the following sections:
- Adapter ID numbering scheme
- Partition numbering
- VIOS partition and system redundancy
- Advanced VIOS network setup
- Advanced storage connectivity
- Shared processor pools
- Live Partition Mobility
- Active Memory Sharing
- Active Memory Deduplication
- Shared storage pools”
* Table 5-1 illustrates an example of virtual SCSI adapter ID allocations.
* Section 5.4 covers advanced VIOS network setup, including link aggregation and VLAN tagging:
“The VIOS partition is not restricted to only one SEA adapter. It can host multiple SEA adapters where:
- A company security policy may advise a separation of VLANs so that one SEA adapter will host secure networks and another SEA adapter will host unsecure networks.
- A company may advise a separation of production, testing, and development networks connecting to specific SEA adapter configurations.
“There are considerations regarding the use of IEEE 802.3ad Link Aggregation, 802.1Q VLAN tagging, and SEA:
- There is a maximum of 8 active ports and 8 standby ports in an 802.3ad Link Aggregation device.
- Each of the links in an 802.3ad Link Aggregation device should have their speeds set to a common speed setting. For example, set all links to 1 Gb/full duplex.
- A virtual Ethernet adapter is capable of supporting up to 20 VLANs (including the Port Virtual LAN ID, or PVID).
- A maximum of 16 virtual Ethernet adapters with 20 VLANs assigned to each adapter can be associated with an SEA adapter.
- A maximum of 256 virtual Ethernet adapters can be assigned to a single virtual server, including the VIOS partitions.
- The IEEE 802.1Q standard supports a maximum of 4096 VLANs.
- SEA failover is not supported in IVM, as it only supports a single VIOS partition.”
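To sketch how these pieces fit together on a VIOS, the commands below build an 802.3ad link aggregation and then an SEA on top of it. (All adapter names, the PVID, and the control channel are illustrative assumptions, not taken from the Redpaper; check your own `lsdev` output before running anything like this.)

```shell
# On the VIOS (padmin shell): aggregate two physical ports into one
# 802.3ad device. The matching switch ports must have LACP enabled.
mkvdev -lnagg ent0,ent1 -attr mode=8023ad

# Suppose the aggregation came back as ent6, ent4 is the trunked virtual
# Ethernet adapter, and ent5 is the control channel for SEA failover.
# ha_mode=auto enables failover between the two VIO servers.
mkvdev -sea ent6 -vadapter ent4 -default ent4 -defaultid 1 \
       -attr ha_mode=auto ctl_chan=ent5
```

Run on both VIO servers (with the appropriate trunk priorities on the virtual adapters), this is what gives you the SEA failover that the IVM, with its single VIOS, can’t provide.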
Whether you set up IBM Power Systems all the time or you’re just getting started with the platform, this Redpaper is an excellent resource for learning or reviewing the relevant technology and terminology.