An LPAR Review

Edit: Some links no longer work.

Originally posted September 2009 by IBM Systems Magazine

To learn more about this topic, read these articles:
Software License Core Counting
Trusted Logging Simplifies Security
Tools You Can Use: Planning and Memory
Improve Power Systems Server Performance With Virtual Processor Folding
Now’s the Time to Consider Live Partition Mobility
Improve Power Systems Server Performance With Enhanced Tools
How to Use rPerfs for Workload Migration and Server Consolidation
Entitlements and VPs - Why You Should Care
Three Lesser-Known PowerVM Features Deliver Uncommon Benefits

In 2006 IBMer Charlie Cler wrote a great article that helps clear up confusion regarding logical, virtual and physical CPUs on Power Systems (“Configuring Processor Resources for System p5 Shared-Processor Pool Micro-Partitions”). This still seems to be a difficult concept for some people to grasp, particularly those who are new to the platform. But if you’re willing to put in the research, there are plenty of quality resources available.

I recently saw Charlie give a presentation to a customer where he covered this topic again, and I based this article on the information that he gave us that day, with his permission.

When you’re setting up LPARs on a hardware management console (HMC), you can choose to have dedicated CPUs for your LPAR, which means the LPAR exclusively uses a CPU; it isn’t sharing CPU cycles with any other LPAR on the frame. On POWER6 processor-based servers you can also elect to have shared dedicated processors, where the system donates excess processor cycles from an LPAR’s dedicated CPUs to the shared processor pool.

Instead of using dedicated or shared dedicated CPUs, you could choose to let your LPAR take advantage of being part of a shared pool of CPUs. An LPAR operates in three modes when it uses a shared pool: guaranteed, borrowing and donating. When your LPAR is using its entitled capacity, it isn’t donating or borrowing from the shared pool. If it’s borrowing from the pool, then it’s going over its entitled capacity and using spare cycles another LPAR isn’t using. If the LPAR is donating, then it isn’t using all of its entitlement, but returning its cycles to the pool for other LPARs to use.

In his presentation, Cler shared some excellent learning points that I find useful:

  • The shared processor pool automatically uses all activated, non-dedicated cores. This means any capacity upgrade-on-demand CPUs that were physically installed in the frame but not activated wouldn’t be part of the pool. However, if a processor were marked as bad and removed from the pool, the machine would automatically activate one of the deactivated CPUs and add it to the pool.
  • The shared processor-pool size can change dynamically as dedicated LPARs start and stop. As you start more dedicated-processor LPARs on your machine, the number of CPUs available to the pool decreases. Inversely, as you shut them down, more CPUs become available.
  • Each virtual processor can represent 0.1 to 1 of a physical processor. For any given number of virtual processors (V), the range of processing units that the LPAR can utilize is 0.1 * V to V. So for one virtual processor, the range is 0.1 to 1, and for three virtual processors, it’s 0.3 to 3.
  • The number of virtual processors specified for an LPAR represents the maximum number of physical processors the LPAR can access. If your pool has 32 processors in it, but your LPAR only has four virtual CPUs and it’s uncapped, the most it’ll consume will be four CPUs.
  • You won’t share pooled processors until the number of virtual processors exceeds the size of the shared pool. If you have a pool with four CPUs and two LPARs, and each LPAR has two virtual CPUs, there is no benefit to sharing CPUs. As you add more LPARs and virtual CPUs to the shared pool, eventually you’ll have more virtual processors than physical processors. This is when borrowing and donating cycles based on LPAR activity comes into play.
  • One processing unit is equivalent to one core’s worth of compute cycles.
  • The specified processing units are guaranteed to each LPAR no matter how busy the shared pool is.
  • The sum total of assigned processing units cannot exceed the size of the shared pool. This means you can never guarantee to deliver more than you have available; you can’t guarantee four CPUs worth of processing power if you only have three CPUs available.
  • Capped LPARs are limited to their processing-unit setting and can’t access extra cycles.
  • Uncapped LPARs have a weight factor, a share-based mechanism for distributing excess processor cycles. The higher the weight, the better the LPAR’s chances of getting spare cycles; LPARs with lower weights are less likely to receive them.
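The share-based weight mechanism in that last point can be modeled roughly in a few lines. This is a minimal sketch, not the hypervisor’s actual algorithm (which dispatches on much finer timescales); the function name and the sample LPAR names and weights are hypothetical:

```python
def distribute_spare(spare_units, uncapped_lpars):
    """Split spare processing units among uncapped LPARs in
    proportion to their weights (simplified share-based model)."""
    total_weight = sum(weight for _, weight in uncapped_lpars)
    return {name: spare_units * weight / total_weight
            for name, weight in uncapped_lpars}

# Two uncapped LPARs compete for 2.0 spare processing units;
# a weight of 128 vs. 64 yields a 2:1 split of the excess cycles.
shares = distribute_spare(2.0, [("prod", 128), ("test", 64)])
```

With these numbers, “prod” gets twice the spare capacity of “test”, which is exactly what the weight factor is for.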

When you’re in the HMC and select the desired processing units, it establishes a guaranteed amount of processor cycles for each LPAR. When you set it to “Uncapped = Yes,” an LPAR can utilize excess cycles. If you set it to “Uncapped = No,” an LPAR is limited to the desired processing units. When you select your desired virtual processors, you establish an upper limit for an LPAR’s possible processor consumption.

Charlie gives an example of an LPAR with two virtual processors. This means the assigned processing units must be somewhere between 0.2 and 2. The maximum processing units the LPAR can utilize is two. If you want this LPAR to use more than two processing units worth of cycles, you need to add more virtual processors. If you add two more, then the assigned processing units must now be at least 0.4 and the maximum utilization is four processing units.
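The 0.1-per-virtual-processor rule in Charlie’s example can be written down directly. A minimal sketch (the function name is mine, not an HMC term):

```python
def entitlement_range(virtual_procs):
    """Valid desired-processing-unit range for a given virtual
    processor count: at least 0.1 per VP, at most 1.0 per VP."""
    return (round(0.1 * virtual_procs, 1), float(virtual_procs))

print(entitlement_range(2))  # (0.2, 2.0) -- two virtual processors
print(entitlement_range(4))  # (0.4, 4.0) -- after adding two more
```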

You need to consider peak processing requirements and the job stream (single- or multi-threaded) when setting the desired number of virtual processors for your LPAR. If you have an LPAR with four virtual processors and a desired 1.6 processing units, and all four virtual processors have work to perform, each receives 0.4 processing units. The maximum processing units available to handle peak workload is four. Individual processes or threads may run slower, while workloads with many processes or threads may run faster.

Compare that with the same LPAR that now has only two virtual processors instead of four, but still has a desired 1.6 processing units. If both virtual processors have work to do, each will receive 0.8 processing units. The maximum processing units available to handle peak workload is two. Here, individual processes or threads may run faster, while workloads with many processes or threads may run slower.
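The arithmetic in these two scenarios is simply the entitlement divided evenly across the busy virtual processors. A quick sketch (the helper name is hypothetical):

```python
def units_per_vp(entitled_units, virtual_procs):
    """Entitled capacity spread evenly across busy virtual processors."""
    return entitled_units / virtual_procs

# Same 1.6 entitled units, different virtual processor counts:
print(units_per_vp(1.6, 4))  # 0.4 per VP, peak capacity of four units
print(units_per_vp(1.6, 2))  # 0.8 per VP, peak capacity of two units
```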

If there are excess processing units, LPARs with a higher desired virtual-processor count are able to access more excess processing units. Think of a sample LPAR with four virtual processors, desired 1.6 processing units and 5.8 processing units available in the shared pool. In this case, each virtual processor will receive 1.0 processing units from the 5.8 available. The maximum number of processing units that can be consumed is four, because there are four virtual processors. If the LPAR only has two virtual processors, each virtual processor will receive 1.0 processing units from the 5.8 available, and the maximum processing units that can be consumed is two, because we only have two virtual processors.
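In other words, an uncapped LPAR’s ceiling is one full core per virtual processor, no matter how much spare capacity the pool holds. A sketch of that cap (the function name is mine):

```python
def max_consumable(virtual_procs, pool_available):
    """Ceiling on an uncapped LPAR's consumption: one core per
    virtual processor, or the spare pool, whichever is smaller."""
    return min(float(virtual_procs), pool_available)

# 5.8 processing units available in the shared pool:
print(max_consumable(4, 5.8))  # 4.0 -- four VPs cap consumption at four
print(max_consumable(2, 5.8))  # 2.0 -- two VPs cap it at two
```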

The minimum and maximum settings in the HMC have nothing to do with resource allocation during normal operation. Minimums and maximums are limits applied only when making a dynamic change to processing units or virtual processors using the HMC. The minimum setting also allows an LPAR to start with less than the desired resource allocations.

Another topic of importance Cler covered in his presentation is simultaneous multi-threading (SMT). According to the IBM Redbooks publication “AIX 5L Performance Tools Handbook” (TIPS0434, http://www.redbooks.ibm.com/abstracts/tips0434.html?Open): “In simultaneous multi-threading (SMT), the processor fetches instructions from more than one thread. The basic concept of SMT is that no single process uses all processor execution units at the same time. The CPU design implements two-way SMT on each of the chip’s processor cores. Thus, each physical processor core is represented by two virtual processors.” Basically, one processor, either dedicated or virtual, will appear as two logical processors to the OS.

If SMT is on, AIX will dispatch two threads per processor. To the OS, it’s like doubling the number of processors. When “SMT = On,” logical processors are present, but when “SMT = Off,” there are no logical processors. SMT doesn’t improve system throughput on a lightly loaded system, and it doesn’t make a single thread run faster. However, SMT does improve system throughput on a heavily loaded system.

In a sample LPAR with a 16-CPU shared pool, SMT on, 1.2 processing units, three virtual processors and six logical processors, the LPAR is guaranteed 1.2 processing units at all times. If the LPAR isn’t busy, it will cede unused processing units to the shared pool. If the LPAR is busy, a capped setting limits it to 1.2 processing units, while an uncapped setting allows it to use up to three processing units, since it has three virtual processors.
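The sample LPAR’s numbers can be summarized as guaranteed units, maximum units, and logical CPU count. A minimal model assuming two-way SMT, as on the POWER5/POWER6 hardware discussed here (the function name is mine):

```python
def lpar_limits(entitled, virtual_procs, capped, smt_on):
    """(guaranteed units, maximum units, logical CPUs) for a
    shared-pool LPAR, assuming two-way SMT."""
    maximum = entitled if capped else float(virtual_procs)
    logical = virtual_procs * (2 if smt_on else 1)
    return (entitled, maximum, logical)

# The sample LPAR: 1.2 entitled units, three VPs, SMT on
print(lpar_limits(1.2, 3, capped=False, smt_on=True))  # (1.2, 3.0, 6)
print(lpar_limits(1.2, 3, capped=True,  smt_on=True))  # (1.2, 1.2, 6)
```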

To change the range of spare processing units that can be utilized, use the HMC to change desired virtual processors to a new value between the minimum and maximum settings. To change the guaranteed processing units, use the HMC to change desired processing units to a new value between the minimum and maximum settings.

When you think about processors, think P-V-L (physical, virtual, logical). The physical CPUs are the hardware on the frame. The virtual CPUs are what you assign to an LPAR in the HMC. The logical CPUs become visible to the OS when you turn on SMT.

When configuring an LPAR, Cler recommends setting the desired processing units to cover a major portion of the workload, then set desired virtual processors to match the peak workload. LPAR-CPU utilization greater than 100 percent is a good thing in a shared pool, as you’re using spare cycles. When you measure utilization, do it at the frame level so you can see what all of the LPARs are doing.

There’s a great deal to understand when it comes to Power Systems and the flexibility that you have when you set up LPARs. Without a clear understanding of how things relate to each other, it’s very easy to set things up incorrectly, which might result in performance that doesn’t meet your expectations. However, by using dynamic logical-partitioning operations, it can be easy to make changes to running LPARs, assuming you have good minimum and maximum values. As one of my colleagues says, “These machines are very forgiving, as long as we take a little care when we initially set them up.”

Other Resources

IBM developerWorks
Virtualization Concepts

IBM Redbooks publications
“PowerVM Virtualization on IBM System p: Introduction and Configuration Fourth Edition” (SG24-7940-03)

“IBM PowerVM Virtualization Managing and Monitoring” (SG24-7590)

IBM Systems Magazine articles
Mapping Virtualized Systems

Shared-Processor Pools Enable Consolidation