Edit: Still good stuff. Some links no longer work.
Move data without downtime using AIX
Originally posted April 2011 by IBM Systems Magazine
Organizations change storage vendors all the time, for many different reasons. Maybe a new storage product has come out with new features and functionality that will benefit the organization. Maybe the functionality isn’t new, but is unknown to your organization and someone decided it’s needed. Maybe a new storage vendor will include desired functionality in the base price. Maybe it’s a “political” decision. Maybe the equipment is just at the end of its life.
Whatever the reason, when it’s time to move from one storage subsystem to another, what options do you have to migrate your data using AIX? With ever-growing amounts of storage presented to our servers, and databases from several hundred GB to a few TB becoming more common, hopefully you’re not even considering something like a backup and restore from tape, along with all of the downtime that goes with it. Instead, you should focus on how to migrate data without downtime.
Evaluate the Environment
The first question I would ask is: how is your environment currently set up? Are you currently using virtual I/O (VIO) servers to present your logical unit numbers (LUNs) to the client LPARs in your environment using virtual SCSI or N_Port ID Virtualization (NPIV)? Are you presenting your LUNs to your LPARs using dedicated storage adapters? Take the time to go through different scenarios and look at the pros and cons of each. Call IBM support and get their opinion. Talk to your storage vendor. The more information you have, the better your decision will be. If possible, do test runs with test machines to ensure your procedures and planning will work as expected.
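If you’re not sure how your LUNs are being presented today, a quick way to check from a client LPAR is to look at the adapters and disks AIX reports. A rough sketch; the output, of course, depends on your environment:
lsdev -Cc adapter    # vscsiX devices indicate virtual SCSI; fcsX devices are Fibre Channel (dedicated or NPIV)
lsdev -Cc disk       # "Virtual SCSI Disk Drive" versus an MPIO/vendor description tells you how each disk arrives
lspath               # shows which parent adapter each disk path uses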
Possible Migration Solutions
If you’re using dedicated adapters in your LPARs to access your storage area network (SAN), the migration could be as simple as the following (a command-level sketch follows the list):
- Loading the necessary storage drivers
- Zoning the new LUNs from the new storage vendor to the existing host bus adapters (HBAs)
- Running cfgmgr so that AIX sees the new disks
- Adding your new disks to your existing volume groups with the extendvg command
- Running the mirrorvg command for your rootvg disks, and the migratepv command to move the data in your other volume groups from the old LUNs to the new LUNs
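As a rough sketch of that sequence, assuming hdisk2 is an old LUN and hdisk10 and hdisk11 are new LUNs (your device and volume group names will differ):
cfgmgr                              # discover the newly zoned LUNs
extendvg datavg hdisk10             # add a new LUN to an existing data volume group
migratepv hdisk2 hdisk10            # move the data off the old LUN onto the new one
reducevg datavg hdisk2              # then drop the old LUN from the volume group
mirrorvg rootvg hdisk11             # for rootvg, mirror onto the new LUN instead
bosboot -ad /dev/hdisk11            # rebuild the boot image on the new disk
bootlist -m normal hdisk11          # and point the boot list at it
Once the rootvg mirror is in sync, you can unmirrorvg and reducevg the old rootvg disk the same way.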
The trick here is making sure that any multipath drivers you need can coexist on the same LPAR. In some cases, you may not be able to find out whether your desired combination is even supported; it may be that no one has tried to mix your particular storage vendors’ code before. This might be a nice time to test things in your test environment.
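One way to get a feel for what is already installed before you add another vendor’s code is to check the driver filesets and which path control module each disk is using. A sketch; the fileset names being searched for and the disk name are just examples:
lslpp -l | grep -iE 'sddpcm|powerpath|mpio'   # see which multipath packages are installed
lsattr -El hdisk10 -a PCM                     # which path control module owns this disk
lspath -l hdisk10                             # and the paths AIX has configured for it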
A cleaner solution may be to use a new VIO server for your new disks. If you have the available hardware on your machine (enough memory, CPU and an extra HBA to bring up the new VIO server), then it could be the ideal scenario. A new VIO server, with the new storage drivers, presenting the new LUNs to your existing client LPARs using vSCSI may be your best bet. The advantage of this method is that the storage drivers are handled at the VIO server level instead of at the client level, as they would be with NPIV. The disadvantage is handling all of the disk mappings in the VIO server. I prefer to run NPIV and map disks directly to the clients’ virtual Fibre Channel adapters, but again you could have the issue of mixing storage drivers, so you would really need to test things before trying it on production LPARs.
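On the new VIO server, the vSCSI mappings themselves are straightforward. A minimal sketch, assuming hdisk10 is a new LUN and vhost0 is the virtual SCSI server adapter for the client (names are hypothetical):
lsdev -type disk                                          # confirm the new LUNs are visible to the VIOS
mkvdev -vdev hdisk10 -vadapter vhost0 -dev aixlpar1_vd0   # map the LUN to the client's virtual SCSI adapter
lsmap -vadapter vhost0                                    # verify the new virtual target device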
If a new VIO server isn’t feasible for whatever reason, and you’re currently running with dual VIO servers and vSCSI, you should be able to remove the paths on your client LPARs that come from your second VIO server, then unmap the corresponding disks on that VIO server. You can then remove the existing disks from the second VIO server, remove any multipath code, and repurpose it to see the new disks with the new code.
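A sketch of that teardown, assuming vscsi1 is the client adapter served by the second VIO server; the disk and virtual target device names are hypothetical:
On the client LPAR:
lspath -l hdisk0                   # confirm which parent adapter each path uses
rmpath -l hdisk0 -p vscsi1 -d      # delete the paths that come through the second VIO server
On the second VIO server (padmin shell):
rmvdev -vtd aixlpar1_oldvd         # remove the virtual target device mapping
rmdev -dev hdisk5                  # remove the old hdisk definition before uninstalling the multipath code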
Clean Up
After the data has been migrated, you can go back and clean up the old disks and then zone the new disks to the second VIO server as well. Remember to set the reserve_policy to no_reserve on the new disks in the VIO servers and the hcheck_interval attribute on the new disks in your clients.
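A sketch of those attribute settings, with hypothetical disk names (hdisk10 on the VIO servers, hdisk4 on the client):
On each VIO server:
chdev -dev hdisk10 -attr reserve_policy=no_reserve    # so both VIO servers can present the same LUN
On the client LPAR:
chdev -l hdisk4 -a hcheck_interval=60 -P              # enable path health checking; -P defers the change if the disk is busy
lsattr -El hdisk4 -a hcheck_interval                  # confirm the setting
And once the old LUNs are empty, reducevg and rmdev -dl on the client will clear them out.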
Chris Gibson has a great article that covers migration scenarios in more detail, which you can read on the developerWorks website.
While your data is migrating, you might want to watch what is happening with your disks. In some cases, such as with the mirrorvg command, you might not be able to get disk information and run logical volume manager (LVM) commands because your volume group is locked. While you can still run topas to watch your disk activity and see that data is being read from your source disk and written to your target disk, you might want more detailed information. In this case, look at the -L flag on the AIX LVM list commands, which Anthony English covers, also on developerWorks.
“On LVM list commands, the -L flag lets you view the information without waiting to obtain a lock on the volume group. So, if you come across the message which tells you the volume group is locked, and you really can’t wait, you could use:
lsvg -L -l datavg
The first -L doesn’t wait for a lock on the volume group. The second one is to list logical volumes. To list a single logical volume, such as lv00, use:
lslv -L lv00
And to list physical volumes (PVs), which are almost always virtual:
lspv -L hdisk3”
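For the disk activity itself, a simple way to watch the copy from another session (the source and target disk names here are hypothetical):
topas                          # overall view; the source disk should show reads and the target writes
iostat -D hdisk2 hdisk10 5     # detailed statistics for just those two disks every 5 seconds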