Edit: Some links no longer work
Originally posted June 20, 2018 on AIXchange
A few weeks ago I came across this great exchange in the AIX forum:
How do I determine the resources needed based on the volume of transactions? By resources I mean cores, memory, etc. Is there a way to arrive at that value?
The reply took the form of an analogy:
This question is about the same as “how much petrol does it take to go 100 miles”–without any specification of details it cannot be answered. In the above version: a bicycle would need no petrol at all, a car maybe 10 [liters] and a tank perhaps 200L of diesel. In your question: it depends on the transactions, the type of processor, the database used, the amount of memory, etc., etc….
In addition, there are no fixed values for this; a lot of these estimations are based on experience. So, without you telling us more about your requirements, we can’t answer your question, not even with a rough estimation.
As Nigel Griffiths notes in this IBM developerWorks post, basic common sense is a useful guide in these matters:
Trick 2: Don’t worry about the tea bags!
No one calculates the number of tea bags they need per year. In my house, we just keep some in reserve, monitor the use of tea bags, and purchase more when needed. Likewise, start with sensible VIOS resources and monitor the situation.
Can this sort of thinking apply to our LPARs? Until we start running a given workload, we may not know how much memory and CPU we’ll ultimately need. Luckily, POWER-based systems are very forgiving in this regard. If some spare memory and CPU are available on our machines, we can (provided our profiles are set correctly) add or remove CPU and memory with a quick dynamic LPAR operation. As we monitor our workloads and tweak our resource allocations, we can arrive at a satisfactory answer with minimal effort.
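For example, assuming the LPAR’s profile already leaves headroom between its desired and maximum settings, a memory or processor change can be made from the HMC command line with chhwres. The managed system and partition names below are just placeholders:

chhwres -r mem -m MY_SYSTEM -o a -p MY_LPAR -q 4096     # add 4 GB of memory (the quantity is in MB)
chhwres -r proc -m MY_SYSTEM -o a -p MY_LPAR --procunits 0.5 --procs 1     # add 0.5 processing units and a virtual processor (shared-processor LPAR)
chhwres -r mem -m MY_SYSTEM -o r -p MY_LPAR -q 4096     # remove the same memory later with -o r

The same changes can be made through the HMC GUI, of course; the point is that none of this requires an outage.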
Here’s the same AIX forum member making a similar analogy back in 2013:
A simple comparison of the difference between performance and speed can be described with this analogy: We have a Ferrari, a large truck, and a Land Rover. Which is fastest? Most people would say the Ferrari, because it can travel at over 300 [kilometers per hour]. But suppose you’re driving deep in the country on narrow, winding, bumpy roads? The Ferrari’s speed would be reduced to near zero, so the Land Rover would be the fastest, as it can handle this terrain with relative ease at near the 100kph limit. Right? But suppose, then, that we have a 10-tonne truck that can barely manage 60kph along these roads. If each of these vehicles is carrying cargo, it seems clear that the truck can carry many times the cargo of the Ferrari and the Land Rover combined. So again: which is the “fastest”? It depends on the purpose (the amount of cargo to transport) and the environment (the roads to be traveled). This is the difference between “performance” and “speed.” The truck may be the slowest vehicle, but if delivering a lot of cargo is part of the goal, it might still be the one that finishes the task fastest.
So how do you determine the amount of resources you’ll need? As Nigel says in the previously referenced developerWorks post:
The classic consultant answer, “it depends on what you are doing with Disk & Network I/O,” is not very useful to the practical person who has to size a machine including the VIOS, nor to the person defining the VIOS partition in order to install it!
“Watch your workload and adjust as needed” may sound like wishy-washy advice, but the point is that real-world system workloads are difficult to simulate. While rPerfs and workload estimators can get you pretty far, you’ll inevitably need to make adjustments along the way. And as I said, this is yet another reason to appreciate AIX and IBM Power Systems: the combination makes it easy to adjust resources and to migrate workloads to different hardware as needed.
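As a starting point for that watching, the usual AIX monitoring commands will tell you whether an adjustment is actually needed; the intervals and counts here are just examples:

lparstat 5 3     # physical processor consumption versus entitlement, three 5-second samples
vmstat 5 3       # memory, paging, and CPU activity over the same window
svmon -G         # a global snapshot of memory use

If lparstat consistently shows the partition running at or above its entitlement, or vmstat shows sustained paging, that’s the cue for the kind of DLPAR adjustment described above.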