Edit: Some links no longer work
Originally posted January 31, 2012 on AIXchange
Back in 2010 I wrote about the changes that were coming to VIOS. One of those big changes, shared storage pools, is now a reality. This gives admins another option to consider when setting up disks on Power servers.
In larger companies, disk changes are typically implemented by SAN teams with many other responsibilities and, often, different priorities. However, if storage is allocated to the servers up front and set up in a shared storage pool, admins can manage that pool themselves and respond more quickly to changing requirements. And with thin provisioning, we can control the amount of disk we actually consume on each server. For the first time since the days of internal disks and expansion drawers, disk is back under our control.
Here’s how Nigel Griffiths explains shared storage pools:
“The basic idea behind this technology… is that [VIO servers] across machines can be clustered together and allocate disk blocks from large LUNs assigned to all of them rather than having to do this at the SAN storage level. This uses the vSCSI interface rather than the pass through NPIV method. It also reduces SAN admin required for Live Partition Mobility — you get the LUN available on all the VIOS and they organise access from there on. It also makes cloning LPARs, disk snapshots and rapid provisioning possible. Plus thin provisioning — i.e., disk blocks — are added as and when required, thus saving lots of disk space.”
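To make that concrete, here's a rough sketch of standing up a pool and carving a client disk out of it, using the same example names (galaxy, atlantic, the hdisk numbers, bluevios1.ibm.com) that Nigel uses in the cheat sheet at the end of this post; your device names and hostnames will differ:
# Clear SCSI reservations on the LUNs first (repeat for every LUN that will be used)
chdev -dev hdisk2 -attr reserve_policy=no_reserve
chdev -dev hdisk3 -attr reserve_policy=no_reserve
# Create the cluster and the pool: hdisk2 becomes the repository disk,
# hdisk3 provides the pool capacity
cluster -create -clustername galaxy -repopvs hdisk2 -spname atlantic -sppvs hdisk3 -hostname bluevios1.ibm.com
# Add a second VIO server to the cluster
cluster -addnode -clustername galaxy -hostname redvios1.ibm.com
# Carve out a thin-provisioned 16 GB logical unit and map it to a client's vhost adapter
mkbdsp -clustername galaxy -sp atlantic 16G -bd vdisk_red6a -vadapter vhost2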
Continuing from last week, here’s more from Nigel’s presentation.
Since shared storage pools are built on top of cluster-aware AIX, the lscluster command also provides more information, including: lscluster -c (configuration), lscluster -d (list all hdisks), lscluster -i (network interfaces), lscluster -s (network stats).
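As a quick sketch, assuming the example cluster name galaxy from the cheat sheet, checking on things from the VIOS might look like this:
# padmin-level view of the cluster
cluster -status -clustername galaxy
# CAA-level views from cluster-aware AIX
lscluster -c     # cluster configuration
lscluster -d     # disks known to the cluster
lscluster -i     # network interfaces
lscluster -s     # network statistics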
In the demo, he also discusses adding disk space and assigning it to client VMs. Keep in mind that while you can replace a LUN in the pool, you cannot remove one.
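Growing the pool or swapping out a LUN is handled with chsp; here's a sketch, again using the cheat sheet's example names:
# Grow the pool by adding LUNs to it
chsp -add -clustername galaxy -sp atlantic hdisk8 hdisk9
# Swap an existing pool LUN for a new one (there is no corresponding "remove")
chsp -replace -clustername galaxy -sp atlantic -oldpv hdisk4 -newpv hdisk24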
He also covers thin and thick provisioning with shared storage pools and shows how to monitor them. Run topas on your VIOS and then press D (make sure it's upper-case) to watch the disk I/O get spread across your disks in 64 MB chunks. From there, Nigel covers how to set up alerts on your disk pool; if you're using thin provisioning, you must make sure you don't run out of space.
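A minimal sketch of the alerting piece, using the example threshold of 80 from the cheat sheet below:
# Set a threshold alert on the pool (80 is the example value from the cheat sheet)
alert -set -clustername galaxy -spname atlantic -value 80
# Confirm what is currently configured
alert -list -clustername galaxy -spname atlantic
# Alerts are written to the error log; check with errlog
errlog -ls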
Nigel also shares his script, called lspool. It's designed to present all of the critical pool information at once instead of making you run several commands by hand:
# lspool - list each cluster and, for each one, its pools and pool details
# Source the profile to pick up the padmin command environment
. ~/.profile

clusters=`cluster -list | sed '1d' | awk -F " " '{ printf $1 " " }'`
echo "Cluster list: " $clusters

for clust in $clusters
do
    pools=`lssp -clustername $clust | sed '1d' | awk -F " " '{ printf $1 " " }'`
    echo Pools in $clust are: $pools
    for pool in $pools
    do
        # Pool name, size, free space, allocated LU space and LU count from lssp
        lssp -clustername $clust | sed '1d' | grep $pool | read p size free totalLU numLUs junk
        let freepc=100*$free/$size
        let used=$size-$free
        let usedpc=100*$used/$size
        echo $pool Pool-Size: $size MB
        echo $pool Pool-Free: $free MB Percent Free $freepc
        echo $pool Pool-Used: $used MB Percent Used $usedpc
        echo $pool Allocated: $totalLU MB for $numLUs Logical Units
        # Threshold alert configured on this pool, if any
        alert -list -clustername $clust -spname $pool | sed '1d' | grep $pool | read p poolid percent
        echo $pool Alert-Percent: $percent
        # Numeric comparison: the pool is overcommitted if allocated LU space exceeds its size
        if [[ $totalLU -gt $size ]]
        then
            let over=$totalLU-$size
            echo $pool OverCommitted: yes by $over MB
        else
            echo $pool OverCommitted: no
        fi
    done
done
Nigel examines snapshots and cloning with shared storage pools, noting that the various commands (snapshot -create, snapshot -delete, snapshot -rollback and snapshot -list) don't use consistent syntax: sometimes a command asks for a -spname flag, other times for a -sp flag. Pay attention so you know which flags are needed with the commands you're running. He also demonstrates how some of this management can be handled using the HMC GUI.
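As a sketch of that inconsistency, based on the cheat sheet entries below (the snapshot name mysnap is a placeholder, and the LU name comes from the earlier mkbdsp example): the logical unit commands take -sp, while the snapshot commands take -spname.
# These take -sp ...
lssp -clustername galaxy -sp atlantic -bd
mkbdsp -clustername galaxy -sp atlantic 16G -bd vdisk_red6a -vadapter vhost2
# ... while these take -spname
snapshot -create mysnap -clustername galaxy -spname atlantic -lu vdisk_red6a
snapshot -list -clustername galaxy -spname atlantic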
The viosbr command is also covered. I discussed it here.
Nigel recommends that you get started by asking the SAN team to hand over a few TB that you can use for testing. Also make sure your POWER6 and POWER7 servers are at the latest VIOS 2.2 level. It’s worth the effort. This technology will save time, boost efficiency and increase your overall responsiveness to users.
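Checking the level takes seconds from the padmin shell on each VIO server:
# Report the VIOS software level (you want a 2.2 level for shared storage pools)
ioslevel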
Finally, here’s Nigel’s shared storage pools cheat sheet:
1. chdev -dev -attr reserve_policy=no_reserve
2. cluster -create -clustername galaxy -repopvs hdisk2 -spname atlantic -sppvs hdisk3 hdisk5 -hostname bluevios1.ibm.com
3. cluster -list
4. cluster -status -clustername galaxy
5. cluster -addnode -clustername galaxy -hostname redvios1.ibm.com
6. cluster -rmnode [-f] -clustername galaxy -hostname redvios1.ibm.com
7. cluster -delete -clustername galaxy
8. lscluster -s or -d or -c or -i = CAA command
9. chsp -add -clustername galaxy -sp atlantic hdisk8 hdisk9
10. chsp -replace -clustername galaxy -sp atlantic -oldpv hdisk4 -newpv hdisk24
11. mkbdsp -clustername galaxy -sp atlantic 16G -bd vdisk_red6a -vadapter vhost2 [-thick]
12. rmbdsp -clustername galaxy -sp atlantic -bd vdisk_red6a
13. lssp -clustername galaxy -sp atlantic -bd
14. lssp -clustername galaxy
15. alert -set -clustername galaxy -spname atlantic -value 80
16. alert -list -clustername galaxy -spname atlantic
17. errlog -ls
18. snapshot -create name -clustername galaxy -spname atlantic -lu LUs
19. snapshot -delete name -clustername galaxy -spname atlantic -lu LUs
20. snapshot -rollback name -clustername galaxy -spname atlantic -lu LUs
21. snapshot -list -clustername galaxy -spname atlantic
22. viosbr -backup -clustername galaxy -file Daily -frequency daily -numfiles 10
23. viosbr -view -file File -clustername Name …
24. viosbr -restore -clustername Name …
25. lsmap -clustername galaxy -all
Take the time to listen to the replay, and you’ll learn even more. I highly recommend it.