It’s Time to Upgrade to VIOS 3.x

Edit: Hopefully you have upgraded by now.

Originally posted July 28, 2020 on AIXchange

VIOS 2.2.6 is going out of support—and every prior 2.x version is already out of support. It’s time to upgrade to 3.x.

VIOS 2.2.6 is going out of support—and every prior 2.x version is already out of support. It’s time to upgrade to 3.x.

Nigel Griffiths’s comprehensive look at VIOS 3.1 features good information about upgrading your VIO servers. You’ll also find presentation slides and links to IBM documentation.

On that note, you should be aware of potential issues surrounding specific VIOS versions when you do an update or a 2.x to 3.x upgrade:

HIPER APAR IJ25390

USERS AFFECTED:
Systems running the AIX 7200-04-02-2015 or 7200-04-02-2016 Technology Level or VIOS 3.1.1.20 or 3.1.1.21 with devices.vdevice.IBM.l-lan.rte below the 7.2.4.3 level.

ERROR DESCRIPTION:
The Virtual Ethernet Adapter can cause a significant degradation in TCP performance for largesend packets.

This issue can be seen with VIOS (serving any AIX®, IBM i, or Linux client LPAR) running the affected VIOS levels noted above. It can also be seen with any AIX client LPAR running the affected AIX levels noted above (regardless of the VIOS level). This issue only occurs when the client LPARs are configured for largesend traffic or VIOS is configured for largereceive traffic.

RECOMMENDATION:
Install APAR IJ25390.
Prior to fix availability, an interim fix is available from either:

ftp://aix.software.ibm.com/aix/ifixes/ij25390/
https://aix.software.ibm.com/aix/ifixes/ij25390/

The iFix can be installed using Live Update (LU). If LU is not used, installation of the iFix requires a reboot.

Local fix
Disable PLSO:

Either disable mtu_bypass:
# chdev -l enX -a mtu_bypass=off

or disable PLSO on the VEA or trunk VEA:
# chdev -l entX -a platform_lso=no
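If you’re not sure whether a system is exposed, it’s quick to check the installed fileset level against the 7.2.4.3 threshold above and to look at the current settings. A minimal sketch, assuming enX/entX are the interface and adapter in question:

# lslpp -l devices.vdevice.IBM.l-lan.rte
# lsattr -El entX -a platform_lso
# lsattr -El enX -a mtu_bypass

lslpp shows the fileset level, and the two lsattr calls show whether PLSO and largesend are currently enabled.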

Thanks to Chris Gibson, who gave me a heads up about this issue some weeks back.

Why I Enjoy Working Remotely

Edit: I still prefer working from home.

Originally posted July 21, 2020 on AIXchange

The technology, tips and tricks Rob McNelly has picked up while working remotely all these years.

Unlike most people, I haven’t had to adapt to working at home. That’s because I’ve worked at home for years.

For me, working at home was an easy choice. I love being able to get going first thing in the morning while I’m fresh and alert. On weekdays at least, I’ll typically awaken thinking of work anyway. So rather than sitting through a commute, worrying about losing time, I can get right to it. I suppose being an introvert plays a part as well. Being around others all day can be draining; I use my alone time to recharge.

For others who may have never previously worked remotely—particularly extroverts who are energized by daily interactions—I can see where this experience would be a struggle. Of course some people simply weren’t set up to work from home. Until a few months ago lots of folks never had the need for a laptop, or for superior cell phone connections and network connectivity, outside of the office.

In my case I further benefit from the fact that my children are grown and my dedicated office workspace has long been in place. I have fast internet, a full-size multi-monitor setup and my cherished old-school tools: an actual landline and a vintage Model M keyboard.

Of course, 2020 has still been an adjustment for me. In a typical year, I’ll travel to work at client sites as required. If the budget allows it, I’ll also attend a couple of conferences.

From these experiences I do understand the unique value of face to face interactions in business settings. Developing relationships online is certainly doable. I have numerous friends that I only know in the virtual world. However, something can be lost when interactions take place exclusively via email, phone, IM, Webex, or what have you. Taking time for meals, chit chat in the halls, and less formal interactions are often the glue that holds relationships together with those you only see infrequently. For this reason, I’ve always tried to meet with remote team members. I believe that being able to put faces with names and get to know one another better is well worth the effort and expense.

And as we’ve seen, there’s still essential work that can only be done onsite. Data centers still need hands on attention, new hardware still needs to be installed in racks, cables must be pulled, parts must be replaced, etc.

So at this point, I am ready to go back on the road. But I’ll always appreciate being able to work remotely.

My Lucky Number

Edit: I did not make it 13 more years.

Originally posted July 14, 2020 on AIXchange

While Rob McNelly understands the thinking behind decisions to archive content, he can’t help but dwell on how much we’ve lost over the years as archives across the web have gone away.

Thirteen years ago this week, my first AIXchange post went live for IBM Systems Magazine.
 
You may notice that the above link takes you to RobMcNelly.com, my personal website and archive. Early this year, around the same time IBM developerWorks shut down, IBM Systems magazine redesigned its website.
 
The decision was made to scale back the magazine archives, in part because the numbers showed few views for most of the old articles. Though I understand the thinking behind these decisions—as well as the ever-changing nature of the web itself—I can’t help but dwell on how much we’ve lost over the years as archives across the web have gone away.
 
In my case, I decided to re-home my content, which includes almost a decade’s worth of AIXchange posts from July 2007 through December 2016. And in case you’re wondering, it’s no small task; I’m still in the process of loading my archives.
 
On that note, I’m continually amazed by the number of broken links I find in my old articles. I typically reference related, more detailed technical information, and these supporting docs also get moved around or removed from the web. While I update these links whenever I can, a lot of it cannot be recovered. Incidentally this is why I generally quote from sources I reference; if the supporting link goes away, at least you have an idea of what was written.
 
As forward-thinking as the world of tech is, I do feel a little nostalgic for the words and ideas that have disappeared from the web. And I wonder how often we’re forced to reinvent wheels and solve problems that were already solved because the solutions put forth in old but still relevant articles are no longer online. Sure, things I wrote two, five or 13 years ago may not be read much these days, but I believe they’re still worth preserving. Collectively these short weekly posts represent part of my life’s work, so for that reason alone archiving is worth the effort. Plus, as I’ve mentioned previously, sometimes when trying to resolve a system issue I’ll find the answer in one of my old posts. Seriously, it’s happened more times than I can count.
 
The AIXchange anniversary is important to me. I use the occasion not just to look back but to look ahead. Wherever you are in your career, take a minute to imagine where you plan to be, and envision what computing might look like then. Me? My plan is to keep writing, linking, and archiving. My hope is I’ll still be doing this 13 years from now, and that you’ll still be reading.

When Looking for Answers, Don’t Discount Older Docs

Edit: The loss of old information is tragic.

Originally posted July 7, 2020 on AIXchange

While a lot of information is dated, it’s still relevant. Old documentation and archives actually hold answers to a lot of our current questions.

A while ago I was asked about implementing system accounting software:

“The system accounting utility allows you to collect and report on individual and group use of various system resources.

“The accounting system utility allows you to collect and report on individual, group, and Workload Manager (WLM) class use of various system resources.

“This accounting information can be used to bill users for the system resources they utilize, and to monitor selected aspects of the system operation. To assist with billing, the accounting system provides the resource-usage totals defined by members of the adm group, and, if the chargefee command is included, factors in the billing fee.

“The accounting system also provides data to assess the adequacy of current resource assignments, set resource limits and quotas, forecast future needs, and order supplies for printers and other devices.”

Along with that introduction from IBM Knowledge Center, check out this primer, this IBM Redbook, this discussion thread on AIX® auditing, and this lengthy doc on monitoring user activity.
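If you want to experiment, the basic moving parts live in the bos.acct fileset. A minimal sketch, assuming the fileset is installed and you’re root (paths are the AIX defaults):

# /usr/sbin/acct/startup
# acctcom | head
# /usr/sbin/acct/shutacct

startup turns process accounting on (it’s normally invoked from /etc/rc), acctcom reports recent process accounting records from /var/adm/pacct, and shutacct turns accounting back off.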

Yes, a lot of that information is quite dated, but it’s still relevant to this topic. Old documentation and archives actually hold answers to a lot of our current questions. Personally I can’t count the number of times I stumbled onto some old post that had precisely the information I was searching for.

A Literal Icon of Computing

Edit: I love reading about this stuff.

Originally posted June 30, 2020 on AIXchange

Rob McNelly makes a toast to early GUI decisions and designs, and the people behind them.

As a fan of computing history—as well as someone who played with a Macintosh in his college computer lab in the late 1980s—I found this 2019 article on Susan Kare, designer of the first Macintosh icons, to be pretty entertaining.

If it wasn’t for needlepoint, the computer graphics we have come to know and love today might have looked a lot different. Pioneering designer Susan Kare was taught by her mother how to do counted-thread embroidery, which gave her the basic knowledge she needed to create the first icons for the Apple Macintosh 35 years ago.

“It just so happened that I had small black and white grids to work with,” she says. “The process reminded me of working needlepoint, knitting patterns or mosaics. I was lucky to have had a mother who enjoyed crafts.”

Kare’s breakthrough designs for the Macintosh, which included the smiling computer at startup, trash can for recycling and a computer disk for saving files, are now commonplace in the digital era. They are so recognizable that they are legendary.

Although Macintosh wasn’t the first GUI, most would argue it was the first time mainstream users used a GUI. Like a lot of you who are my age, I was already comfortable with command line interfaces, so learning to use a mouse was interesting. Single click? Double Click? Drag and drop? If I recall, there were tutorials and programs to help us along. On that note, Susan Kare also designed the card deck for Microsoft’s Solitaire program, which was intended to be a learning aid for the computer mouse.

Here’s a bit more from the Kare article. Be sure to read the whole thing:

Kare devised various ideas and concepts to translate basic commands and procedures into visual cues for users. Thus emerged the trash can, computer disk and document with turned-up page corner—all of which are, in one form or fashion, omnipresent icons for computer functions.

Using graphics on computers was not new but Apple wanted to demystify the operating system so average people would understand intuitively what they needed to do. Early computers tended to be complicated behemoths that were developed for mathematically inclined scientists and engineers.

Honestly, it’s hard to even remember what using a computer was like before the Macintosh. How else would you try to visualize your interface? How could you differentiate the array of functions and file systems to simplify the desktop experience for non-technical users? Can you remember marking up documents in WordPerfect, and then transitioning to Microsoft Word? I remember the first time I used WYSIWYG publishing software with a laser printer; the results were astounding. 

It may be taken for granted, but we owe quite a bit to those early decisions and designs, and the people behind them.

Videos are Another AIX Learning Tool

Edit: Have you watched these?

Originally posted June 23, 2020 on AIXchange

Get to know the series of YouTube videos from Nigel Griffiths that he calls AIX in Focus.

You may be interested in this series of YouTube videos from Nigel Griffiths that he calls AIX in Focus.

I love his description:
AIX the best OS in the world. Fast, built to make the best use of POWER based computer with:

  • 192 CPU cores and 8192 core threads
  • 32 TB of memory
  • 64 Gen 4 PCIe adapter slots 

For the system admin guys the commands to make life simple and flexible are second to none and commands are stable—not changing the tools and command every couple of years.

There is also a plenty of hot open source tools recompiled for AIX in the toolbox. Plus AIX is the birthplace of nmon and now njmon performance stats tools pulling data from the excellent “perfstat” library.

I choose my bank based on them hosting my bank account on AIX 🙂

This series covers topics like the LVM, smitty, JFS2, JFS2 snapshots, JFS2 with NO log, NIM servers, and much more.

Nigel has made many other vids as well. Check them out on his YouTube main page, and get ready to do a ton of learning.

I suppose videos aren’t for everyone, but I find them very valuable. Not that I rely on any one educational/training platform; I still read plenty, including IBM Redbooks and other documents and articles. Of course nothing beats actual machine time, but beyond that, there’s truly no one way to learn.

The Value and Challenges of a Spare Disk

Edit: Hopefully this will help someone if they run across the same issue.

Originally posted June 16, 2020 on AIXchange

Having a spare internal disk is handy in general but that doesn’t come without navigating some challenges. Rob McNelly gives the breakdown on both.

When I build a new system, I prefer to have the VIO servers run from internal disks. Sure it’s slower, but it’s a trade-off I’m willing to make. This way I know if I lose the SAN, I can still boot VIOS to conduct some troubleshooting.
 
If the customer is willing, I like to have some spare SAS disks available in the frame as well. That takes the urgency out of replacing a failed disk. We just immediately migrate our data to the currently inactive disk we have available, and then swap out the bad one at our leisure after IBM ships the new disk.
 
Having a spare internal disk is pretty handy in general. During the early stages of racking and stacking systems, while the SAN team is creating and mapping LUNs, an available disk can be used to load a test LPAR or be a place to stage a NIM server, among many other things.
 
For instance, during a recent POWER9 model S922 install, I created a temporary NIM server on an internal disk that I mapped to a client LPAR. My plan was to migrate it to a SAN LUN once they were available. While I’ve done this many times without a hitch, in this instance, running extendvg rootvg hdisk1 (where hdisk1 is my new SAN LUN) triggered this error:
 
#extendvg rootvg hdisk1
0516-1980 extendvg: Block size of all disks in the volume group must be the same. Cannot mix disks with different block sizes.
0516-792 extendvg: Unable to extend volume group.
 
A search for the message brought up this IBM Support doc:
 
“Such an error means you cannot mix a physical volume (PV) of 4 KB block size with PV blocks of other sizes. The block size of all PVs in the volume group must be the same.
 
“This is explicitly mentioned in the man page for the extendvg command.
 
“Unfortunately, there is no way to fix that issue, because AIX, at the present, only supports 4K block sizes on sissas drives. AIX does not support 4K block sizes on fibre-attached storage. There is no option for block-size translation for SAS disks in firmware nor in AIX kernel.
 
“Naturally, that means that you will not be able to include both sissas drive and fibre-attached drives within the same volume group, since the volume group requires all disks within the volume group to utilize the same disk block size.”
 
I was left with a few choices. I could back up my system to a mksysb and restore it from a NIM server using that mksysb. Of course that was problematic in this case, since this was the NIM server I was trying to move from internal disk in the first place. The better option was to bail myself out using the alt_disk_copy command.
  
“The alt_disk_copy command allows users to copy the current rootvg to an alternate disk and to update the operating system to the next maintenance or technology level, without taking the machine down for an extended period of time and mitigating outage risk. This can be done by creating a copy of the current rootvg on an alternate disk and simultaneously applying software updates. If needed, the bootlist command can be run after the new disk has been booted, and the bootlist can be changed to boot back to the older maintenance or technology level of the operating system.”
 
I wasn’t looking to update the software; I only wanted to copy rootvg to my new disk. I ran:
 
#alt_disk_copy -B -V -d hdisk1
 
After running a bosboot, I modified my bootlist so it pointed to the new disk and rebooted from the SAN LUN. Once this was done and I was satisfied everything was running as expected on the new LUN, I cleaned up the original rootvg by running:
 
#alt_rootvg_op -X old_rootvg
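For reference, the bootlist change mentioned above amounts to something like this (a sketch; hdisk1 is the new SAN LUN, and your device names will differ):

#bootlist -m normal hdisk1
#bootlist -m normal -o

The second command displays the current normal-mode boot list, so you can verify the change before rebooting.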
 
Sometimes in these situations I don’t immediately consider the power we have with AIX®  and IBM Power Systems™ hardware. With alt_disk_copy, I don’t always have to do backup/restore operations, or even migratepv. Of course it may require a reboot, but in this instance it was well worth the time spent.

Resolving Issues Running migratepv

Edit: Have you run into this issue?

Originally posted June 9, 2020 on AIXchange

What happens when you try to use migratepv to move rootvg to the new LUN and remove the copy of rootvg from the original LUN?

My client added a LUN to an LPAR on the fly using cfgmgr. When we tried to use migratepv to move rootvg to the new LUN and remove the copy of rootvg from the original LUN, we got this error:

0516-1259 mirrorvg: None of the available disks support booting on this specific system or the boot logical volume already exists on the specified disk. If these are new bootable disks and you have not rebooted since configuring them, you may try exporting LVM_HOTSWAP_BOOTDISK=1 and run this command again to override this condition.
0516-1200 mirrorvg: Failed to mirror the volume group.

A web search turned up the following.

On POWER systems, a disk is found to be bootable by the system firmware at IPL time. Each LPAR runs its own private version of the firmware, which probes all available hardware for that partition and builds a device tree in NVRAM. When AIX® boots it reads this device tree and runs cfgmgr to configure the devices in AIX.

If a disk is added AFTER boot time, it has usually not been configured correctly by the firmware to allow AIX to see that it is bootable.

Resolving The Problem
1) Run the command below:
# export LVM_HOTSWAP_BOOTDISK=1
# bootinfo -B hdisk1

Then, please confirm the output is 1 and proceed to mirror the root volume group again.

2) Restart the firmware by shutting down the LPAR and powering it off, then reactivating it. Rebooting allows the firmware to probe the adapter and disks again and mark them as bootable.

If the same issue persists, please contact the IBM AIX Software Support Team.

We tried exporting LVM_HOTSWAP_BOOTDISK=1 and rerunning mirrorvg, but it failed again. Once we shut down the LPAR and reactivated it, everything worked as expected.

Disclaimer: I like sharing these tips with you, the reader, but my future self also appreciates having these out there. That’s because it’s entirely possible I’ll run into a similar dilemma down the line.

For the Love of Information

Edit: Did you already know about this?

Originally posted June 2, 2020 on AIXchange

A concise, easy to understand explanation of MTUs.

While I can’t say I was dying to learn about MTUs, I genuinely appreciate how this brief blog post clearly and thoroughly covers an obscure topic:

“The MTU (Maximum Transmission Unit) states how big a single packet can be. Generally speaking, when you are talking to devices on your own LAN the MTU will be around 1500 bytes and the internet runs almost universally on 1500 as well. However, this does not mean that these link layer technologies can’t transmit bigger packets.

“For example, 802.11 (better known as WiFi) has a MTU of 2304 bytes, or if your network is using FDDI then you have a MTU around 4352 bytes. Ethernet itself has the concept of ‘jumbo frames,’ where the MTU can be set up to 9000 bytes (on supporting NICs, Switches and Routers).

“However, almost none of this matters on the internet. Since the backbone of the internet is now mostly made up of Ethernet links, the de facto maximum size of a packet is now unofficially set to 1500 bytes to avoid packets being fragmented down links.

“On the face of it 1500 is a weird number, we would normally expect a lot of constants in computing to be based around mathematical constants, like powers of 2. 1500, however fits none of those.

“So where did 1500 come from, and why are we still using it? The ‘Ethernet: Distributed Packet Switching for Local Computer Networks’ paper from 1980 is an early note of the efficiency cost analysis of larger packets on a network. This being especially important to Ethernet at the time, since Ethernet networks would either be sharing the same coax cable between all systems, or there would be Ethernet hubs that would only allow one packet at a time to be transmitted around all members of the Ethernet segment.

“A number had to be picked that would mean that transmission latency on these shared (sometimes busy) segments would not be too high, but also that packet header overhead would not be too much…. It would seem at best that the engineers at the time picked 1500 bytes, or around 12000 bits as the best ‘safe’ value.

“Since then various other transmission systems have come and gone, but the lowest MTU value of them has still been ethernet at 1500 bytes. Going bigger than lowest MTU on a network will either result in IP fragmentation, or the need to do path MTU detection. Both of which have their own sets of problems. Even if sometimes large OS vendors dropped the default MTU to even lower at times.”

Such a concise, easy to understand explanation. Read the whole thing.
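If you’re curious where your own AIX systems stand, checking is simple. A quick sketch (en0 is illustrative):

# netstat -i
# lsattr -El en0 -a mtu

netstat -i lists each interface with its MTU, and lsattr queries a specific interface. If your network supports jumbo frames end to end, something like chdev -l en0 -a mtu=9000 would raise it, but confirm every hop in the path first.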

AIX 6.1 Running on POWER9? It Can Be Done.

Edit: To be clear the lppsource and spot were from the new AIX level, you do not use a spot from the mksysb in order to get this method to work. And it still works great with 7.1 and 7.2 if you need to have a more recent version of AIX to get it to run on POWER9.

Originally posted May 26, 2020 on AIXchange

The system software map comes in handy when a customer that was running AIX 6.1 on a POWER5 server wanted to migrate their LPAR to POWER9.

Recently a customer that was running AIX 6.1 on a POWER5 server wanted to migrate their LPAR to POWER9. They were running 6100-09-09-1717 and rather than perform an update on the source LPAR, they preferred to take a mksysb and simply run it on their POWER9 system.

They consulted the system software map to learn the minimum levels needed to run AIX 6.1 on POWER9:

Technology Level: 6100-09
Base Level: 6100-09-11
Recommended Level: 6100-09-12-1846
Latest Level: 6100-09-12-1846

So how could we take their mksysb from 6100-09-09-1717 to the recommended/latest level? I reached out to IBMer Chris Gibson, who said they could migrate their mksysb on the fly.

You can restore the mksysb directly to the latest TL. If you have a NIM server, use the command below:

nim -o bos_inst -a source=mksysb -a lpp_source=<lpp_source> -a spot=<SPOT> -a mksysb=<mksysb> -a image_data=mksysb_image_data -a accept_licenses=yes client_hostname
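For context, the mksysb has to be defined as a NIM resource before you can reference it in that command. A hedged sketch, with illustrative names and paths:

nim -o define -t mksysb -a server=master -a location=/export/mksysb/lpar61.mksysb lpar61_mksysb

The resource name (lpar61_mksysb here) is then what you pass as the mksysb attribute in the bos_inst operation above.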

My customer wisely has extended support from IBM for AIX 6.1. This provides access to download media from Entitled Systems Support (ESS).

The last level available for AIX 6.1 base media was 6100-09-12-1837, so they downloaded that and made it available on their NIM server as the lppsource and the spot. Then they ran the command, which restored and then updated their mksysb on the fly. Once it booted up, we were able to download the latest fixes and update it to 6100-09-12-1846.

It’s quite a testament to AIX that an OS version that GAed in 2007 can run on the latest hardware—not to mention that the whole migration/updating process can be done on the fly.

You might not run into this sort of thing every day, but if you ever need to do something similar, hopefully you’ll recall this post.

More Than Ever, Spare Computing Power is Needed

Edit: Did any of you contribute spare cycles?

Originally posted May 19, 2020 on AIXchange

Alternatives to running SETI@home.

Many years ago as a young-ish administrator, I ran SETI@home. There were clients for different operating systems, including AIX. It was easy to see how different processors affected the speed of completing work units.
 
As of March 30, SETI@home no longer sends data for clients to work on, but if you still enjoy using your spare machines, or have considered setting up a spare machine for distributed computing like this, they suggest some alternatives:
 
“… we encourage you to continue to donate computing power to science research—in particular, research on the COVID-19 virus. The best way to do this is to join Science United and check the Biology and Medicine box.”
 
As noted on the Science United site:
 
“BOINC is the preeminent platform for volunteer computing (VC). It is used by most VC projects, including SETI@home, Einstein@home, Climateprediction.net, IBM World Community Grid, and Rosetta@home.
 
“Science United is a new way to participate in BOINC. With Science United, computer owners volunteer for science areas rather than for specific projects. Science United assigns computers to appropriate projects; these assignments may change over time.
 
“We call this the coordinated model for VC. It has the advantage that new projects can get computing power without having to do their own publicity and volunteer recruitment. The goal is to provide the power of VC to thousands of scientists, rather than a few dozen as was previously the case. …
 
“The user interface of Science United is designed to appeal to a wider audience than the current BOINC user base, which is mostly male and tech-savvy. For example, Science United has no leader boards. …
 
“Science United is also intended to serve as a unified “brand” for VC, so that it can be marketed more effectively.”
 
Another project, Folding@home, does similar work with spare cycles:
 
“What is distributed computing?… the calculations we wanted to do would take about a million days on a fast processor. So it seemed natural that we might be able to get it done in 10 days if we had access to 100,000 processors. By using distributed computing, we can split up the simulation, run each piece through a computer, and then combine them together afterwards. This really sped up our results.”
 
If you’ve ever wondered what you can do to help, this is something to consider.

POWER Virtualization Best Practices

Edit: Still good stuff

Originally posted May 12, 2020 on AIXchange

Tips for technical documentation like IBM Redbooks, Power Systems Virtual User Group replays. There’s always something to learn, or relearn.

I’m always interested in hearing how others configure and maintain their systems. Sure, many lessons can be learned by trial and error in the test lab, but through my years in tech, I’ve gained the most by listening to those around me. I try to soak up all information I can, whether it’s random scripts and tips and anecdotes, or technical documentation like IBM Redbooks, Power Systems Virtual User Group replays and the perspectives of IBM Champions and AIX experts on Twitter. There’s always something to learn, or relearn.

With this in mind, I want to note that the latest version of the IBM POWER Virtualization Best Practices Guide (version 4.0) came out in March. While it’s a fairly short document, it’s a dense read—but it’s definitely worth your time.

Here’s the table of contents:

1 INTRODUCTION
2 VIRTUAL PROCESSORS
            2.1 Sizing/configuring virtual processors
            2.2 Entitlement vs. Virtual processors
            2.3 Matching entitlement of a LPAR close to its average utilization for better performance
            2.4 When to add additional virtual processors
            2.5 How to estimate the number of virtual processors per uncapped shared LPAR
3 AIX VIRTUAL PROCESSOR MANAGEMENT - PROCESSOR FOLDING
            3.1 VPM folding example
            3.2 Tuning Virtual Processor Management Folding
            3.3 POWER7/POWER7+/POWER8/POWER9 Folding
            3.4 Relationship between VPM Folding and PHYP dispatching
4 AIX PERFORMANCE MODE TUNING
            4.1 Processor Bindings in Shared LPAR
5 LPAR PAGE TABLE SIZE CONSIDERATIONS
6 ASSIGNMENT OF RESOURCES BY THE POWERVM HYPERVISOR
            6.1 PowerVM Resource assignment ordering
            6.2 Overview of PowerVM Hypervisor Resource Assignment
            6.3 How to determine if a LPAR is contained within a chip or drawer/Dual Chip Module (DCM)
            6.3.1 Displaying Resource Assignments for AIX
            6.3.2 Displaying Resource Assignments for Linux
            6.3.3 Displaying Resource Assignments for IBM i
            6.4 Optimizing Resource allocation for affinity
            6.5 Optimizing Resource Assignment – Dynamic Platform Optimizer
            6.6 Affinity Groups
            6.7 LPAR_placement=2
            6.8 LPAR_placement considerations for failover/Disaster recovery
            6.9 Server Evacuation Using Partition Mobility
            6.10 PowerVM Resource Consumption for Capacity Planning Considerations
            6.11 Licensing resources (COD)
7 ENERGY SCALE
8 PROCESSOR COMPATIBILITY MODE
9 CONCLUSION
10 REFERENCES

The references consist of a number of links to various articles and documents, including a piece I wrote on setting up LPARs. Note that the link in the doc no longer works, but that article (“An LPAR Review”) is preserved in my archive. Even though it was originally published in 2009, the information is still relevant. But be sure to pull up the doc and check out all the references, because you’ll find lots of good information.
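On the AIX side, the resource assignment material in section 6.3 pairs naturally with a couple of standard commands. A quick sketch:

# lparstat -i
# lssrad -av

lparstat -i shows entitlement, virtual processors and the LPAR’s mode, while lssrad -av shows how the LPAR’s memory and CPUs map onto affinity domains.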

IBM Visual Insights Now With x86 Support

Edit: Does this interest you?

Originally posted May 5, 2020 on AIXchange

On Twitter, IBM’s David Spurway announced information for IBM Visual Insights V1.2.

On Twitter, IBM’s David Spurway posted announcement information for IBM Visual Insights V1.2:
 
IBM Visual Insights V1.2, previously called IBM PowerAI Vision, extends support to GPU-accelerated AI software on x86-based servers.
 
To accommodate the diversity of infrastructures used for AI solutions, IBM has expanded support of its award-winning software beyond POWER architectures to include Intel platforms. To avoid confusion in the marketplace, IBM PowerAI Vision has been renamed IBM Visual Insights.
 
In addition to its current functions, IBM Visual Insights V1.2 now offers:
 
* Support on x86-based servers with GPUs for training and inference
* Availability on IBM Cloud as a client-managed service
 
The announcement letter also includes specific hardware and software requirements. This solution runs in POWER9 and POWER8 environments:
 
            * Power System S822LC (8335-GTB) for HPC servers
            * Power System AC922 (8335-GTG and 8335-GTH) for HPC servers
            * Power System IC922 (9183-22X)
            * x86-based servers
            * Minimum system RAM 128 GB
 
            * Red Hat Enterprise Linux 7.6-Alt (POWER9)
            * Red Hat Enterprise Linux 7.7 (POWER8 and x86)
            * Ubuntu Server 18.04
            * NVIDIA GPU driver

Upcoming Digital Events

Edit: Did you attend any of these?

Originally posted April 28, 2020 on AIXchange

In light of the current worldwide circumstances, IBM and other vendors are moving their conferences and events to the digital realm.

In light of the current worldwide circumstances, IBM and other vendors are moving their conferences and events to the digital realm. Considering the registration and traveling expenses that are typically required to attend these events, you may find it well worth your time to check them out online.
 
For instance, IBM Think 2020 is now a digital event, with sessions running May 5-6. Register for a free pass.
 
And IBM’s TechU spring sessions are already underway, continuing until July 2. Choose from one of three educational tracks: IBM Power Systems, IBM Storage and IBM Z/LinuxONE. Through replay sessions and presentation decks, you can catch up on any session that’s already taken place.
 
Here are some other events of interest:
 
Red Hat Summit 2020, April 28-29
 
Cisco Live, June 2-3
 
VMware Empower (online events for various geographic regions run from May-July)
 
If I left anything out, please let everyone know in the comments.

A Reminder About Remote HMC Upgrades

Edit: This is still the only way to do HMC upgrades.

Originally posted April 21, 2020 on AIXchange

A guide for remote upgrades on the HMC.

I love that it’s possible to do remote upgrades on the HMC. This isn’t anything new of course, so imagine my shock when I was recently in a conversation with someone who didn’t realize that remote HMC upgrades can be done via the network.
 
Happily, there’s a lot of good documentation about remote upgrades, and HMC upgrades in general. Let’s start with IBM Support. Here’s how to upgrade to Version 8.8.7, and here’s how to upgrade x86 or Model 7042 versions to Version 9. Both docs provide detailed information about various methods, so they’re helpful whether you’re using the network or some DVDs.
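From memory, the network-based flow in those docs boils down to a handful of HMC commands. This is only a sketch, with an illustrative server, user and directory, so follow the doc that matches your HMC level:

saveupgdata -r disk
getupgfiles -h ftpserver.example.com -u ftpuser --passwd mypass -d /images/network_install
chhmc -c altdiskboot --mode upgrade -s enable
hmcshutdown -r -t now

saveupgdata preserves the HMC configuration, getupgfiles pulls the upgrade images onto the alternate disk partition, chhmc tells the HMC to boot from it in upgrade mode, and hmcshutdown reboots into the upgrade.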
 
Here are directions for getting updates for your HMC. The IBM Knowledge Center has a deeper dive into this same topic.
 
Finally, here are a couple of things from my archive. In 2011 I wrote about remote upgrades. Spoiler: I loved them back when, too—so much so that I revisited the topic back in 2013.
 
Yes, a lot of this information is dated, but it remains applicable. Plus a refresher never hurts.

Checking the altinst_rootvg oslevel

Edit: Still good to know.

Originally posted April 14, 2020 on AIXchange

What to do when your AIX system has an altinst_rootvg.

A while back this interesting question came up: If you’re using alt disk to clone and upgrade your AIX system and later on you log into that system and see it has an altinst_rootvg, how do you determine the AIX version that’s on the inactive disk? What if you want to look at old data or information on that disk?
 
The answer is explained here:
 
Test2:/# oslevel -s
7100-01-04-1141
Test2:/#
 
Test2:/# lspv
hdisk0          00d342e7131c6b47                rootvg        active
hdisk1          00d342d637j21a59                 altinst_rootvg
Test2:/#
 
Test2:/# alt_rootvg_op -W -d hdisk1
Waking up altinst_rootvg volume group …
 
Test2:/#
 
Now the altinst_rootvg is in the Active state, and the alt filesystems are mounted on the server.
 
Test2:/# lspv
hdisk0         00d342e7131c6b47                 rootvg        active
hdisk1         00d342d637j21a59                  altinst_rootvg  active
Test2:/#
 
Test2:/# df
Filesystem        512-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4              524288    380024   28%     3091     3% /
/dev/hd2             3801088    396856   90%    34020     8% /usr
/dev/hd9var          2621440   2279336   14%     3560     2% /var
/dev/hd3              524288    499272    5%      105     1% /tmp
/dev/hd1              524288    507336    4%      102     1% /home
/proc                      -         -     -        -     -  /proc
/dev/hd10opt          524288    278872   47%     3370     6% /opt
/dev/alt_hd4          524288    365552   31%     3871     3% /alt_inst
/dev/alt_hd1          524288    507336    4%      104     1% /alt_inst/home
/dev/alt_hd10opt     1310720    562888   58%     5694     4% /alt_inst/opt
/dev/alt_hd3          524288    499120    5%      116     1% /alt_inst/tmp
/dev/alt_hd2         5636096    184120   97%   103336    15% /alt_inst/usr
/dev/alt_hd9var      2621440   1835656   30%     6632     3% /alt_inst/var
 
We need to start the chroot shell within the alternate rootvg to identify the OS level/TL/SP information.
 
Test2:/# chroot /alt_inst /usr/bin/ksh
Test2:/# oslevel -s
7100-01-01-1216
Test2:/#
Test2:/# exit
 
You can return to the rootvg environment by exiting the alt shell.
 
Now it is really important to put the cloned rootvg back to sleep.
 
Test2:/# alt_rootvg_op -S altinst_rootvg
Putting volume group altinst_rootvg to sleep …
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst
Fixing LV control blocks…
Fixing file system superblocks…
 
Related: The IBM Knowledge Center explains how to access data between the original rootvg and the new alternate disk and provides this intro to the alt_rootvg_op command. And in an AIXchange post from August 2019, I shared Chris Gibson’s tips on managing multiple instances of alt_rootvg.

Repurposing an Old Tape Library

Edit: When was the last time you had to worry about this?

Originally posted April 7, 2020 on AIXchange

Rob McNelly walks through customer troubleshooting from an IBM Redbook.

A client was looking to repurpose and virtualize an old TS3200 tape library. The idea was to connect it to a VIO server and then assign it to their IBM i clients.

However, errors came up when running cfgdev:

            cfgmgr: 0514-621 WARNING: The following device packages are required for device support but are not currently installed.
            devices.sas.changer

After downloading atape drivers from IBM Fix Central and loading them on the LPAR, the errors went away. An smc0 and some rmt* devices could be seen.

            # lsdev -Cc tape
            rmt0 Available 06-00-00 IBM 3580 Ultrium Tape Drive (SAS)
            rmt1 Available 06-00-00 IBM 3580 Ultrium Tape Drive (SAS)
            rmt2 Available 06-00-00 IBM 3580 Ultrium Tape Drive (SAS)
            rmt3 Available 06-00-00 IBM 3580 Ultrium Tape Drive (SAS)
            smc0 Available 06-00-00 IBM 3573 Library Medium Changer (SAS)

However, more errors emerged with an attempt to use mkvdev to assign the drive to the client LPAR. It took some searching to sort it all out.

First, this tip (scroll down):

You can virtualize SAS tape drives, but only per-LPAR. Meaning, on VIOS it shows Available, then you “mkvdev -vdev rmt0 -vadapter vhostx -dev vrmt0” as padmin ID. This will work ONLY if no atape is installed on VIOS.

Then this reminder from IBM Support:

Note 1: The VIOS cannot use Atape drive, such as the 3580, when attempting to virtualize a SAS tape drive to AIX client.

And finally, another forum comment:

… You cannot virtualize the 3100. In fact, you cannot virtualize any medium changer. You have to get rid of the virtual layer and assign a dedicated adapter to the LPAR and connect the 3100. Then you will have rmt0 and smc0.

Also, do not install Atape drivers on VIOS. This causes havoc on your virtual layer. If Atape is installed, then you will not be able to share the tape drive by mapping it to LPARs.

For completeness, refer to this IBM Redbook.

In this case, once they removed the devices, removed the drivers, reran cfgdev and then reran the mkvdev commands, it worked as expected.
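Roughly, that cleanup sequence looks like this (a sketch; the device names are illustrative, and you should verify the exact Atape fileset name with lslpp before removing it):

# rmdev -dl rmt0        (repeat for each rmt* device and for smc0)
# installp -u Atape.driver
$ cfgdev
$ mkvdev -vdev rmt0 -vadapter vhost0 -dev vrmt0

The rmdev and installp steps run as root on the VIOS; cfgdev and mkvdev run as padmin.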

I realize only a handful of people will ever deal with this issue, but I believe whenever a problem is solved, the solution is worth sharing.

Alert Impacts VIOS 3.1.1 NPIV Users

Edit: I still recommend following them.

Originally posted March 31, 2020 on AIXchange

Today’s post features a trio of valuable technical insights, courtesy of IBMers on Twitter.

Chris Gibson alerted me to this IBM technical bulletin (excerpt below) that impacts those running VIO server 3.1.1 and using N_Port ID virtualization (NPIV).

“This is only an issue using NPIV with VIOS 3.1.1. Client LPARs can experience I/O hangs and timeouts under heavy I/O load. We have seen this issue primarily during database backups, but can occur with any large sequential I/Os at high rates.

Affected VIOS Levels and Recommended Fixes
Minimum Affected Level: VIOS 3.1.1.0 devices.vdevice.IBM.vfc-server.rte 7.2.4.0
Maximum Affected Level: VIOS 3.1.1.10 devices.vdevice.IBM.vfc-server.rte 7.2.4.0
Fixing Level: VIOS 3.1.1.20 IJ23222

Interim Fix: iFix

Note: Installation of these fixes requires a reboot.”

On Twitter, Chris noted the summ tool (excerpt below), which is used to summarize and decode AIX I/O error messages:

“DESCRIPTION
“summ” is an AIX only diagnostic tool used to decode fibre-channel and SCSI disk AIX error report entries. It is an invaluable tool that can aid in diagnosing storage array or SAN fabric related problems providing the source of the error.

The script generates single line error messages enhancing the readability of the AIX error report. This is the same tool used by IBM Support worldwide and is considered safe to run in a production environment.

USAGE
The summ command can process results of the AIX error log from a file or from standard input…”
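The tool’s page covers the exact invocation; based on the description above it presumably runs along these lines (an assumption on my part, so check the README that ships with it):

# errpt -a > errlog.txt
# summ errlog.txt

or, reading from standard input, errpt -a | summ.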

Also from Twitter, Gareth Coates cited his article (excerpt below) that lays out the merits and caveats of connecting baseboard management controller (BMC) systems to your HMCs:

“Summary

Some IBM Power Systems have a flexible service processor (FSP) and these systems are generally connected to, and managed by one or two Hardware Management Consoles (HMCs).

Other Power Systems have a Baseboard Management Controller (BMC) and can be controlled in a number of ways, including the Intelligent Platform Management Interface (IPMI) and the web-based graphical user interface (GUI). It is also possible to connect the system to an HMC. This article discusses the merits of doing so.”

Read the articles, and if you’re on Twitter, follow Chris and Gareth. Both consistently provide valuable technical insights.

A Great Reason to Revisit Quick Response (QR) Codes

Edit: I still wonder if I was the only one that did not know.

Originally posted March 24, 2020 on AIXchange

Want remote access to important customer-based service information? It’s actually just a QR code away.

I had to laugh at Kiran Tripath’s Tweet about quick response (QR) code locations. I can’t believe I’ve never actually tried to use those stickers to look up information.

The codes themselves enable remote access to customer-based service information, including details about problem analysis, parts removal and replacement, error codes, firmware license keys and service videos. In all seriousness, the stickers are quite useful.

Here’s a summary of QR code locations:

  • For codes 8335-GTC, 8335-GTG, 8335-GTH, 8335-GTW and 8335-GTX, the QR code label is located on the right flange located on the front of the EIA rack unit.
  • For codes 9006-12P, 9006-22C and 9006-22P, the QR code label is located on the top service cover. The system must be in the service position to view the label.
  • For codes 9008-22L, 9009-22A, 9009-41A and 9009-42A, the QR code label is located on the top of the server on the upper-right corner.
  • For codes 9223-22H and 9223-42H, the QR code label is located on the top of the server on the upper-right corner.
  • For code 9080-M9S, the QR code label is located on the front service card.
  • For code 9040-MR9, the QR code label is located on the right flange located on the front of the EIA rack unit.
  • For ESLL and ESLS storage enclosures, the QR code label is located on the left side at the rear of the system.
  • For the EMX0 PCIe Gen3 I/O expansion drawer, the QR code label is located on the front service card.

The IBM Knowledge Center has a list of QR code locations in table form.

Assuming you’re on the raised floor and have phone service, you can simply take a picture of the QR sticker that is physically on the server and learn what you need to know in an instant. It’s just one more option for gaining access to relevant information.

So have you all known about this for years and I’m just late to the party?

Now is the Time to Migrate from Older HMCs

Edit: There are still people that do not know about these changes.

Originally posted March 17, 2020 on AIXchange

Reminder: Support will soon end for x86-based HMC devices. Now is the time to migrate your data.

A coworker alerted me to this end of service notice for x86-based HMC devices (excerpt below):

“This is the last release to support the 7042 machine type. HMC V9R2 will support the 7063 machine type and Virtual HMC Appliances (x86/ppc64le) only.

“Note: iFixes and Service packs on top of V9 R1 M940 will be supported on 7042 machine types.”

I’ve previously discussed the big changes to the HMC, but as more of you move to POWER9 hardware, here’s a reminder: It’s time to migrate from the older HMCs that still reside in your environments.

While we’re on this topic, I was recently asked about adding a user ssh key to an HMC. The process is detailed in this IBM Knowledge Center doc (excerpt below):

“To enable scripts to run unattended between an SSH client and an HMC, complete the following steps:

1. Enable remote command execution….
2. On the client’s operating system, run the SSH protocol key generator. To run the SSH protocol key generator, complete the following steps:

a. To store the keys, create a directory that is named $HOME/.ssh (either RSA or DSA keys can be used).
b. To generate public and private keys, run the following command:
ssh-keygen -t rsa

The following files are created in the $HOME/.ssh directory:
private key: id_rsa
public key: id_rsa.pub

The write bits for both group and other are turned off. Ensure that the private key has a permission of 600.

On the client’s operating system, use ssh and run the mkauthkeys command to update the HMC user’s authorized_keys2 file on the HMC by using the following command:

ssh hmcuser@hmchostname mkauthkeys --add <the contents of $HOME/.ssh/id_rsa.pub>

Note: Double quotes (“) are used in commands to ensure that the remote shell can properly process the command. For example:

ssh "mkauthkeys hscuser@somehmchost --add 'ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQDa+Zc8+hn1+TjEXu640LqnVNB+UsixIE3c649Cgj20gaVWnFKTjcpWVahK/duCLac/zteMtVAfCx7/ae2g5RTPu7FudF2xjs4r+NadVXhoIqmA53aNjE4GILpfe5vOF25xkBdG9wxigGtJyOKeJHzgnElP7RlEeOBijJDKo5gGE12NVfBxboChm6LtKnDxLi9ahhOYtLlFehJr6pV/lMAEuLhd6ax1hWvwrhf/h5Ym6J8JbLVL3EeKbCsuG9E4iN1z4HrPkT5OQLqtvC1Ajch1ravsaQqYloMTWNFzM4Qo5O3fZbLc6RuJjtJv8C5t4/SZUGHZxSPnQmkuii1z9hxt hscpe@vhmccloudvm179'"

To delete the key from the HMC, you can use the following command:
ssh hmcuser@hmchostname mkauthkeys --remove joe@somehost

To re-enable password prompts for all hosts that access the HMC through SSH, use the scp command to copy the key file from the HMC:
scp hmcuser@hmchostname:.ssh/authorized_keys2 authorized_keys2

Edit the authorized_keys2 file, remove all lines in the file, and then copy it back to the HMC:
scp authorized_keys2 hmcuser@hmchostname:.ssh/authorized_keys2"

There’s also supporting information about enabling remote command execution and using the HMC remote command line (excerpt below):

“You can use the command line interface in the following situations:

  • When consistent results are required. If you have to administer several managed systems, you can achieve consistent results by using the command line interface. The command sequence can be stored in scripts and run remotely.
  • When automated operations are required. After you have developed a consistent way to manage the managed systems, you can automate the operations by invoking the scripts from batch-processing applications, such as the cron daemon, from other systems.”
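Once the key is in place, unattended commands behave as you’d hope. For example (hostname illustrative):

ssh hscroot@hmchostname lssyscfg -r sys -F name

That lists the managed systems without a password prompt, and the same pattern works from cron or other scripts.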

Patching on the Fly with AIX and Linux

Edit: Have you been patching without rebooting?

Originally posted March 10, 2020 on AIXchange

Both operating systems offer ways to do limited patching without having to reboot the system.

AIX admins often manage Linux servers as well. If this fits your job description, you should know that each operating system has an option that allows limited patching to be done without a reboot. The AIX version is called Live Update; the Red Hat version is called Live Patching.

From the AIX site:
“Starting with AIX Version 7.2, the AIX operating system provides the AIX Live Update function that eliminates the workload downtime that is associated with AIX system restart that is required by previous AIX releases when fixes to the AIX kernel are deployed. The workloads on the system are not stopped in a Live Update operation, yet the workloads can use the interim fixes after the Live Update operation.

“IBM delivers kernel fixes in the form of interim fixes to resolve issues that are reported by customers. If a fix changes the AIX kernel or loaded kernel extensions that cannot be unloaded, the host logical partition (LPAR) must be restarted. To address this issue, AIX Version 7.1, and earlier, provided concurrent update-enabled interim fixes that allow deployment of some limited kernel fixes to a running LPAR. All fixes cannot be delivered as concurrent update-enabled interim fixes. Starting with AIX Version 7.2, you can use the Live Update function to eliminate downtime that is associated with the AIX kernel update operation. This solution is not constrained by the same limitations as in the case of concurrent update enabled interim fixes.”
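In practice, a Live Update is driven by a stanza file plus geninstall -k. A hedged sketch, assuming an interim fix with a made-up name of IJ12345s1a.epkg.Z staged in /tmp/ifixes:

# cp /var/adm/ras/liveupdate/lvupdate.template /var/adm/ras/liveupdate/lvupdate.data
(edit lvupdate.data to describe the HMC and the disks the surrogate LPAR will use)
# geninstall -k -p -d /tmp/ifixes IJ12345s1a.epkg.Z
# geninstall -k -d /tmp/ifixes IJ12345s1a.epkg.Z

The -p run is a preview/validation pass; drop it to perform the actual Live Update.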

This is from Red Hat:
“RHEL 8.1 marks the first release of RHEL 8 that will receive live kernel patches for critical and selected important CVEs, and no premium subscription is required. They will be delivered via the regular content stream and can be consumed via Yum updates. (Previously, these were on request for premium subscription customers and “hand delivered.”) The goal of the program is to minimize the need to reboot systems in order to get the latest critical security updates.”
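On the RHEL side, consuming live patches is a yum operation. A sketch based on Red Hat’s documented kpatch-patch package scheme:

# yum install "kpatch-patch = $(uname -r)"
# kpatch list

The install subscribes the running kernel to its live patch stream, and kpatch list shows the patch modules that are installed and loaded.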

For more, check out this Red Hat video and this discussion of AIX Live Update methodology. Chris Gibson has a best practices guide and presentation slides, and on March 25 you can take in his Power VUG session on Live Update best practices.

Of course these are different tools for different operating systems, but take a moment to consider them in tandem: We continue to advance in a direction where more fixes can be applied on the fly. While I don’t imagine we’ll ever see a world completely free of reboots, this is welcome progress.

A Roundup of IC922 Coverage

Edit: Did you get up to speed on the server?

Originally posted March 3, 2020 on AIXchange

IBM unveiled the Power Systems IC922 server last month. Here’s a handy list of where to learn more about it.

IBM unveiled the Power Systems IC922 server last month. For those who haven’t had time to dig into the details, here’s an information roundup.

Videos

Nigel Griffiths and Gareth Coates webinar
First look with Nigel Griffiths
Five fast facts

Announcement Presentations and Information

* Slides: Part one and part two. Note: These slides track with the webinar video.

Data sheet (excerpt below):

“IBM Power System IC922 provides the compute-intensive and low-latency infrastructure needed to unlock business insights from trained AI models. POWER9-based, Power IC922 provides advanced interconnects (PCIe Gen4, OpenCAPI) to support faster data throughput and decreased latency. Accelerated, the Power IC922 supports up to six NVIDIA T4 GPUs.”

Announcement letter (excerpt below):

“The IBM Power System IC922 (9183-22X) server is a powerful new two-socket server designed for high-performance data analytics and computing workloads to help meet your accelerated computing needs.

“The Power System IC922 server offers:
* Two IBM POWER9 processor single-chip module (SCM) devices that provide high performance with 24, 32, or 40 fully activated cores and 2 TB maximum memory
* TPM2.0 for secure and trusted boot
* Up to 24 2.5-inch SAS3/SATA drives via 3 optional drive backplanes (each supporting 8 drives) and several optional SAS3/SATA adapters
* Up to six NVIDIA T4 GPU accelerators
* Two integrated USB 3.0 ports in rear and one USB 3.0 in front
* Two hot-swap and redundant power supplies: 2000 W 220 V AC with C20 receptacle
* Six hot-swap and redundant fans
* 19-inch rack-mount hardware with full rails and cable management arm included
….

“The Power System IC922 server supports two processor sockets, offering 12-core (2.8 GHz base frequency), 16-core (3.35 GHz base frequency), and 20-core (2.9 GHz base frequency) POWER9 technology-based system configurations in a 19-inch rack-mount with a 2U (EIA units) drawer configuration.

“The Power System IC922 server provides two hot-swap and redundant power supplies and 32 DIMM memory slots. Sixteen GB (#EM62), thirty-two GB (#EM63), and sixty-four GB (#EM64) supported memory features allow for a maximum system memory of 2 TB. Memory bandwidth is maximized with 16 DIMM memory slots populated.

Multiple I/O options in the system, including:
* Two PCIe x16 3.0 FHFL slots (supports double-wide accelerator)
* Two PCIe x16 4.0 LP slots
* Two PCIe x8 3.0 FHFL slots (physically x16)
* Two PCIe x8 3.0 FHFL slots
* Two PCIe x16 3.0 LP slots
* Up to 24 2.5-inch SAS3/SATA drives via 3 optional drive backplanes (each supporting 8 drives) and several optional SAS3/SATA adapters.”

Additional Coverage

IBM Redbook: IBM Power System IC922―Technical Overview and Introduction
IBM IT Infrastructure blog: “Complete your AI puzzle with inference”
IBM Marketplace IC922 summary (excerpt below):

“Engineered for AI inference, the IBM Power System IC922 provides the compute-intensive and low-latency infrastructure you need to unlock business insights from trained AI models. The POWER9-based Power IC922 provides advanced interconnects (PCIe Gen4, OpenCAPI) to support faster data throughput and decreased latency. Accelerated, the Power IC922 supports up to six NVIDIA T4 Tensor Core GPU accelerators.”

Take the time to go through this material and you’ll be an IC922 expert in no time.

Videos Highlight the Latest on Storage

Edit: Have you signed up yet?

Originally posted February 25, 2020 on AIXchange

Learn what’s new and on the horizon in the world of storage through videos produced by the IBM Washington Systems Center.

Are you familiar with the IBM Washington Systems Center? This group produces video presentations that cover a wide range of storage topics.

Check out their YouTube page. And―for now―you’ll find presentation abstracts on the developerWorks Connections Platform (excerpt below):

“Welcome to the IBM North America Washington Systems Center-Storage Accelerate with IBM Storage program. This blog is for our program of technical webinars covering a variety of storage topics. Please subscribe to this blog or join our mailing list to hear about upcoming technical webinars. To join our mailing list please send an email to accelerate-join@hursley.ibm.com.”

Here are abstracts from the most recent meetings (also excerpted from that page):

[Jan. 21] Cisco/IBM c-type SAN Analytics
As Storage Area Networks continue to evolve so does the need to understand how and what is causing issues. This session will help introduce the newest Analytics features of IBM C-type. This technology, as included in the 32Gbps platform, provides the visualization that can answer many of the questions that administrators are forced to answer every day. This SAN Analytics data is driven to provide visibility of the next generation high speed connectivity deployments which include 32Gbps Fibre Channel carrying both SCSI and NVMe traffic. This presentation will explain how to leverage the inbuilt always on technology of C-type SAN Analytics. We will discuss how to architect, deploy and use the data to solve real world issues that every administrator faces from differing IO sizes, Exchange Completion times, storage device response times, and slow drain information as well.

[Jan. 14] Brocade/IBM b-type SAN Modernization
Do you need to modernize your Storage Network? Be Ready for what’s New and Beyond. Storage technologies are now based on silicon … not spinning disks … and massive changes are underway (NVMe over Fibre Channel being just the first step).

These changes in storage will place pressure squarely on the storage network so you must be ready with a modern SAN infrastructure to ensure optimum performance from your applications. If not, then bottlenecks move into the network, slowing your workloads. IBM b-type Gen 6 SANs are the proven modern infrastructure ready for storage today and as it evolves. Learn about the products and powerful tools that will assess and prepare your network for what is coming over the next decade.”

Although most of us concentrate on servers and operating systems, all of our servers connect to storage. That’s incentive enough to learn about new function and capabilities, and discover what’s on the horizon.

PowerVM Capability Offers Value to SAP HANA

Edit: Have you tried this yet?

Originally posted February 18, 2020 on AIXchange

A new PowerVM capability called Virtual Persistent Memory (vPMEM) provides a fast restart of workloads during outages in SAP HANA environments.

Jay Kruemcke recently examined a new PowerVM capability called Virtual Persistent Memory (excerpt below):

“One of the drawbacks of in-memory databases is the amount of time required to load the data from disk into memory after a restart. In one test with an 8TB data file representing 16TB database, it took only six minutes to shut down SAP HANA, but took over 40 minutes to completely load that database back into memory. Although the system was up and responsive much more quickly, it took a long time for all data to be staged back into memory….

“In October 2019, IBM announced Virtual Persistent Memory (vPMEM) with PowerVM. Virtual Persistent Memory isn’t a new type of physical memory but is a way to use the PowerVM hypervisor to create persistent memory volumes out of the DRAM that is already installed on the system. vPMEM is included at no additional charge with PowerVM.

“The data in memory is persistent as long as the physical Power server is not powered off. By maintaining data persistence across application and partition restarts, it allows customers to leverage fast restart of a workload using persistent memory. An added benefit is that there is no difference in application performance when using vPMEM because the underlying DRAM technology is the same as for the non-persistent memory.

“Although vPMEM does not preserve memory across a server power down, most IBM Power customers seldom power off their systems because of the many reliability features that are built into Power hardware. vPMEM provides for fast restart in the vast majority of planned maintenance and unplanned outages without compromising the performance of HANA during normal use.”

The excerpt below comes from IBM’s Asim Khan, program director, offering management for SAP HANA on POWER:

“The feedback we received from our HANA on POWER clients was that Power Systems clients do not face the challenge of frequent hardware-related outages. In fact, one large SAP HANA client told us they haven’t had to reboot their Power System for 30 months straight due to any hardware outages. The Forrester TEI Study of IBM Power Systems for SAP HANA determined an average reduction of 48 hours of annual downtime after customers move from non-POWER environment to Power Systems. What POWER clients say they want is a solution that helps to fast restart the environment when there’s a software-related planned (patch the OS or SAP environment) or unplanned outage in their SAP HANA environment.”

As you evaluate SAP HANA, hopefully this technology factors into your decision-making calculus. And if you want to get deeper into vPMEM, check out this IBM Techdocs Library whitepaper by Jim Nugen and Olaf Rutz. Related docs are also available.

developerWorks Connections Pages in Transition

Edit: What a loss to the community.

Originally posted February 11, 2020 on AIXchange

Today’s post is a friendly (and important) reminder that the developerWorks Connections platform is set to permanently shut down on March 31.

You’re probably aware that IBM is sunsetting the developerWorks Connections platform.

Hopefully you’ve bookmarked the new locations of your favorite AIX content. In my case, that includes Nigel Griffiths, Gareth Coates, Chris Gibson and the AIX Virtual User Group.

While many of the old links still work, the Connections platform is set to permanently shut down on March 31, 2020. If all this is news to you, be sure to read the IBM FAQ (excerpted below):

NEW: On January 2, 2020, the majority of the developerWorks Connections platform will be taken offline and made unavailable. If your Connections group is not active, no further action is needed.

A small number of communities that have requested a sunset extension will be kept online and placed in read-only mode. You will receive an error message if you try to save new content (wikis, posts, forum messages) after that date. Affected pages will have a banner at the top identifying it as a target for final removal on March 31, 2020. This removal includes all community blogs, wikis, forums, activities and files….

Q. Why are these Connections pages going away?
A. IBM is consolidating major content portals to improve the customer experience.

Q. What specific content of mine will be impacted and removed?
A. Your community and its apps will no longer be available, including: Activities, blogs, files, forums and wikis.

All URLs starting with https://ibm.com/developerworks/community/ and including:

  • https://www.ibm.com/developerworks/community/groups/*
  • https://www.ibm.com/developerworks/community/wikis/*
  • https://www.ibm.com/developerworks/community/activities/*
  • https://www.ibm.com/developerworks/community/blogs/*
  • https://www.ibm.com/developerworks/mydeveloperworks/blogs/*
  • https://www.ibm.com/developerworks/community/files/*
  • https://www.ibm.com/developerworks/community/forums/*
  • https://www.ibm.com/developerworks/community/profiles/*
  • https://www.ibm.com/developerworks/community/homepage/*

Q. What will happen to the information that is currently published on developerWorks Connections? Where will I be able to find it?
A. The majority of existing content and posts will be reviewed by the content owners and moved to the appropriate IBM website… Backups of all Connections content will be made prior to January 2 as a precaution. Select URL redirects will be created to help customers find content in its new home if it has been moved.

New Redbooks Publication Catalogs Key AIX Enhancements

Edit: You should be able to bypass the registration

Originally posted February 4, 2020 on AIXchange

Published last month, the new IBM Redbooks publication covers security and networking enhancements, virtualization and cloud capabilities, PowerVM features—plus plenty more.

I was glad to come across this current Redbooks publication, titled “IBM AIX Enhancements and Modernization.”

This publication, which was finalized in January, is a reassuring reminder of the ongoing work that’s bringing significant new and enhanced features and function to my favorite operating system.

The Redbooks publication covers security and networking enhancements, virtualization and cloud capabilities, AIX and PowerVM features, disaster recovery and high availability and plenty more, including a handy chapter on AIX fundamentals.

For some specifics, Chapter 1, General Enhancements, features these topics: Live Update; Server Flash Caching; Multipath I/O; iSCSI software initiator; Network Installation Manager; Logical Volume Manager; JFS2; Multiple alternative disk clones; Active Memory Expansion; nmon current processor frequency reporting; National language support; and AIX Toolbox for Linux Applications.

The Security Enhancements chapter describes recent IBM AIX security enhancements, including AIX trusted execution, AIX secure boot, multi-factor authentication, “Cryptolibs” and address space layout randomization.

Finally, the AIX Fundamentals chapter covers things like JFS2, RBAC, EFS, AIXpert and MultiBOS. You get the idea. There’s much more of course, so download it or order a hard copy. (Registration with IBM is required.)

PowerVC 1.4.3 Install Error

Edit: I assume you are not still seeing this issue

Originally posted January 28, 2020 on AIXchange

Rob finds a simple solution to a customer’s PowerVC install fail.

A while back I helped out on a PowerVC 1.4.3.0 installation. The customer was getting this error:

ERROR powervc_oslo.config.data_utils [-] End Of File (EOF) in read_nonblocking().

Exception style platform.

<pexpect.spawn object at 0x7f625b1ce990>

version: 2.3 ($Revision: 399 $)

command: /usr/bin/mysqladmin

args: [‘/usr/bin/mysqladmin’, ‘password’]

searcher: searcher_re:

    0: re.compile(“New password:”)

buffer (last 100 chars):

before (last 100 chars): .sock’ (2)’

Check that mysqld is running and that the socket: ‘/var/lib/mysql/mysql.sock’ exists!

As is often the case when I encounter a problem, I started with a web search—I should say, I started and finished. Seriously, I’m not sure I’ve ever found such a simple answer so quickly. The exact error came up on the first return, along with this solution (excerpt below):

“Resolving The Problem
Please proceed with the steps below:
Please run
ln -s /usr/bin/resolveip /usr/libexec/resolveip

Then attempt this command

/root/powervc-1.4.3.0/lib/config.sh --interface yourinterface --ipversion 4 --ipoption default --firewall yes --silent 0 --hostaddress yourfqdn --operation configure --logfile /opt/ibm/powervc/log/yourlastinstalllogfilename

* where yourinterface is the name of the Network adapter used to configure your PowerVC server.
* where yourlastinstalllogfilename is the name of the powervc install log from the current failed install
* where yourfqdn is the fully qualified domain name of your PowerVC server
Once it completes successfully, run the following command: powervc-restore.”
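
Before rerunning the installer, two quick checks (using the paths from the error output above) will confirm the fix is in place:

# the symlink created by the fix should now exist
ls -l /usr/libexec/resolveip
# the socket appears once mysqld is running
ls -l /var/lib/mysql/mysql.sock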

Hopefully you won’t run into this issue, but if you do, it is easily resolved.

Using nmonchart; Verifying IBM Downloads

Edit: Still good information

Originally posted January 21, 2020 on AIXchange

Courtesy of Russell Adams, here’s some useful information on performance monitoring tool nmon and how to more securely verify IBM downloads.

Recently I found some interesting information on Russell Adams’ blog. This is about nmon, the performance monitoring tool (excerpt below):

“Recently I was learning about Nigel’s new efforts with JSON and web based graphing, and came across his nmonchart tool. This new tool has dynamically zooming graphs via javascript directly in your browser from a single file! I had to try it, and I’m very impressed.

Running it against a single file was trivial and the resulting HTML loaded into a browser without issues to view the data. However when I wanted to view several days of data in separate files there wasn’t an option.

A few minutes later, some awk magic resulted in an awk script to combine data files for reading into nmonchart.”
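
To give you a sense of what is involved (this is my own rough sketch, not Russell’s script), combining daily nmon files means keeping the header records from the first file and renumbering the Tnnnn snapshot labels so they stay sequential across files:

awk '
  FNR == 1 { offset += max; max = 0 }              # entering the next input file
  match($0, /,T[0-9][0-9][0-9][0-9]/) {            # data lines carry ,Tnnnn snapshot labels
      snap = substr($0, RSTART + 2, RLENGTH - 2) + 0
      if (snap > max) max = snap
      sub(/,T[0-9][0-9][0-9][0-9]/, sprintf(",T%04d", snap + offset))
      print; next
  }
  NR == FNR { print }                              # header records from the first file only
' day1.nmon day2.nmon > combined.nmon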

He includes a sample script and some files, so be sure to read the whole thing.

In another blog entry, he covers an option for verifying IBM downloads made to the NIM server (excerpt below):

“IBM doesn’t typically publish a simple text file of checksums with any of their POWER or AIX downloads. They do include an XML file for Download Director.

They do make an attempt to allow customers to validate the download using that XML file in VIO downloads by providing a file called ck_sum.bff. The customer is instructed to execute ck_sum.bff against the directory of downloads to confirm the downloads.

This raises many red flags for me for violating security best practices. I should never execute untrusted code from any source on my systems! The typical place this would run is on a NIM system or AIX box as root! I strongly advise against using this method.

Given IBM does have the checksums in an XML file, we can extract them for validation without using untrusted code. I accomplished this on a Linux box with XSLTPROC, but I believe this tool may be available for AIX as well.”
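
Russell’s approach uses xsltproc; as a cruder sketch of the same idea, you can skip the stylesheet entirely and simply check each downloaded file’s checksum against the XML. This assumes the XML embeds MD5 sums as plain text, which you should confirm by inspecting your particular file first:

# peek at how the checksums are stored before trusting this approach
grep -i checksum download.xml | head

# csum is the native AIX checksum utility (use md5sum on Linux)
for f in *.bff; do
    sum=$(csum -h MD5 "$f" | awk '{print $1}')
    grep -q "$sum" download.xml && echo "OK  $f" || echo "BAD $f"
done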

Again, there’s a script and sample output with this post.

Managing POWER8 Update Access Keys

Edit: Hopefully you are on top of this with your machines.

Originally posted January 14, 2020 on AIXchange

As the initial warranty periods on POWER8 systems start to expire, update access keys will need to be replaced. Here’s how.

A customer recently experienced issues with POWER8 firmware access keys (page excerpt below):

“POWER8 (and later) servers include an “update access key” that is checked when system firmware updates are applied to the system. Update access keys include an expiration date. As these update access keys expire, they need to be replaced using either the Hardware Management Console (HMC), the Advanced Management Interface (ASMI) on the service processor, or directly using the update_flash command.”

To get a new update access key, follow the steps below (excerpted from the page):

“An expired update access key can be acquired by visiting the following website:
http://www.ibm.com/servers/eserver/ess/index.wss

Step 1: Gather Data—You will need to know your server’s machine type, serial number, and the country of installation in order to obtain a new update access key.

Step 2: Login to the “Entitled System Support” web page using your IBM WebID.
http://www.ibm.com/servers/eserver/ess/index.wss

Step 3: Navigate to the “My Entitled Hardware” section and select “View, download and request update access keys.”

Step 4: Enter your server’s machine type, serial number and country of installation. You may enter each server machine type and serial number individually, or you can upload a list of servers.

Step 5: On the “Update Access Key Request Confirmation” page, you’ll see the result of your request(s). You’ll have the option of viewing your server’s new update access key directly, downloading the key via a file or sending it to your IBM Web ID via email.

Note: If you were not provided with a new update access key for your server, click on “contacts” on the right side of the screen to engage with IBM.

Step 6: After retrieving your server’s new update access key, return to your server and enter the key on your HMC (using the “Enter CoD Code” feature) or via the ASMI (using the “CoD Activation” feature).”

My client was getting reminders on their HMC that it was time to input a code, but after going to the IBM Entitled Systems Support (ESS) site and selecting the update access key option, the dates and the keys did not change. This IBM Support Page explains why:

“The initial Update Access Key is loaded during the manufacturing process and is dated to align with the expiration of the initial warranty period. The term of any subsequent agreement (e.g. IBM hardware maintenance service agreement or special bid agreement) may vary. Therefore, replacement Update Access Keys are issued with a duration of 180 days.”

In short, the initial warranty hadn’t expired. When they purchased their system, the date of expiration was a few years out. Once that date was reached and the new maintenance agreement kicked in, they could update the keys with no problem.

Going forward, they’ll need to update keys every 180 days. This will become more common as the initial warranty periods of these POWER8 systems run out, so keep it in mind if you haven’t reached that point with yours.

ESS Mobile App Updates

Edit: Have you tried this?

Originally posted January 7, 2020 on AIXchange

The latest version of the IBM Entitled Systems Support mobile app puts asset management at your fingertips.

As a business partner, I’m often asked by customers for help with downloading software, updating access keys and handling similar tasks. Typically I’d go to ibm.com/eserver/ess for these requests. Recently, though, I received an interesting email about the latest version of the IBM Entitled Systems Support mobile app, which provides another way to get at this information. Below is an excerpt from the email:

“ESS mobile: new version 5 available now
We are excited to announce a new major release for the ESS mobile app.

Our team has worked tirelessly and rebuilt the ESS mobile app from the ground up, greatly improving multiple components and creating a starting point for fast updates going forward.

Highlights of version 5 release include:

  • login is greatly improved and will work more consistently (for real this time)
  • enhanced speed and fluidity of screen actions and transitions
  • purchase of Elastic Capacity on Demand is back
  • push notifications for your Elastic CoD orders and generated codes on your mobile device
  • share your generated Elastic codes or any other available keys & codes directly with your email or messaging apps
  • use the mobile app in your browser; it is available as a Progressive Web Application (PWA) under https://www.ibm.com/eserver/ess/mobile/
  • more information about PWA technology can be found here: https://en.wikipedia.org/wiki/Progressive_web_applications
  • PWA apps work in Google Chrome, Apple Safari and Firefox (non-private mode)
  • PWA apps do not work in Microsoft Edge and Internet Explorer. Other webkit-based browsers may work (not tested)
  • a quick launch icon for PWA version is available on the right side of the ESS website”

And to recap the basics about ESS mobile:

“IBM ESS mobile is asset management at your fingertips. Quickly view and access all your Power systems, available keys and codes, run reports and purchase and assign Elastic Capacity on Demand: any time, all the time. Your IBMid profile is fully synchronized across both the website and mobile app: access, registration and all activities are always shared.”

To test the app without downloading it to your phone, go here (registration required).

This appears to be the same information I can get from my phone. I have these options in the Reports section:

SWMA Inventory report: Active
SWMA Inventory report: Expires within three months
SWMA Inventory report: Inactive
Electronic Proof of Entitlement (ePoE) for Power report
Key Version/Release Inventory report: Active keys
SWMA Inventory report: All
Permanent key installation commands
ESS software update orders report
Capacity Reset report
ESS downloadable product list

If I’m not at my desk, it’s great to have a way to access this information. How about you? Does this app interest you, or do you exclusively use a desktop?

It Adds Up

Edit: I mention Ragnar and triathlons in this piece. I was not able to do either in 2020, but that did not stop me from running, staying active and staying ready for the time when I am once again able to compete. And yes, I am still doing my Duolingo lessons, and I still have a long way to go.

Originally posted December 17, 2019 on AIXchange

Doing just a little bit on a regular basis can have a lasting impact on your career (and life).

This post is almost three years old, but these equations at the heart of it have stuck with me:

1.01 ^ 365 = 37.8
0.99 ^ 365 = 0.03
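
If you want to verify the arithmetic yourself, a one-line awk command will do it (the post rounds 0.0255 up to 0.03):

awk 'BEGIN { printf "%.2f  %.4f\n", 1.01^365, 0.99^365 }'    # prints 37.78  0.0255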

Below is an excerpt from that post:

“One of my favorite concepts of finance is compounding. Compounding is simply when we make an investment and reinvest the return on that investment. So, if you invest in a house that you rent out to other people, then if you take that rent and buy new houses, you are compounding your investments.

In other words, compounding is a process where the value of an investment is consistently reinvested over time. This is the most powerful concept when it comes to maximizing growth on investments. That is why I love this poster.

In the first case, we are multiplying 1.01 with 1.01 for 365 days. In the other 0.99 is multiplied with 0.99 for 365 days. There is just this tiny difference but over a year, these kinds of differences add up to crazy differences.”

As I wrote about a year ago, I’ve been making small adjustments to my lifestyle. I’ve been eating better, I’ve been more active, and I’m seeing results. I have been competing in mini triathlons for a few years now, and I ran my first sprint triathlon in 2018. I was satisfied with my time, but I entered the same event and completed the same course this year, in 14 fewer minutes.

I run most days at this point. I have more endurance. I have more strength. I like the beach. The closest one to me here in Phoenix is in Puerto Penasco, Mexico, about a three-and-a-half-hour drive. Since I’m there on a somewhat regular basis, learning some Spanish seemed appropriate. As with my fitness regimen, a little adds up when it comes to taking on a new language. I spend about 10-15 minutes a day doing Duolingo lessons on my phone. That may not sound like much, but I find I now understand rudimentary words and phrases. If I tune to Spanish language radio or watch Univision, I notice common words and phrases.

You can apply this to anything. If you’re looking to pick up AIX, or Linux, or a new software package, diving in and drinking from the fire hose might work; for the sake of your career, it may even be essential. But small, consistent effort over time produces results. Cramming the information into your brain might have gotten you through that college term paper, but doing just a bit on a regular basis over the long term has a more lasting impact on actual learning and development.

I know I’ll continue to keep it slow and steady as I prepare for my next Ragnar event.

NIM Users Must Get Current on AIX

Edit: Is there anyone who has not updated their NIM servers yet?

Originally posted December 10, 2019 on AIXchange

A recent change in OS versioning requires NIM servers to run on the latest versions of AIX.

AIX Network Installation Management (NIM) can be used to build out VIO servers. If you rely on this valuable tool, take note of a recent change in OS versioning that requires the NIM server to run on the latest versions of AIX. As of VIOS 3.1.x, AIX 7.2 is running under the covers (prior versions ran AIX 6.1). Because your NIM master’s AIX version must be at least equal to the AIX version of the NIM clients it manages, you should update to 7.2 sooner rather than later.
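
Checking where you stand takes one command on each side (the sample output here is illustrative):

oslevel -s    # on the NIM master; shows the service pack level, e.g. 7200-03-03-1913
ioslevel      # on the VIO server, as padmin; e.g. 3.1.0.21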

Which specific AIX version do you need? Chris Gibson recently provided a link on Twitter to the minimum NIM master levels for VIOS clients, which should clear up any confusion:

If using NIM to back up, install or update a VIOS partition, the NIM master must be at or above the levels shown below.

VIOS Release    VIOS Level         Minimum NIM master level
VIOS 3.1.0      VIOS 3.1.0.21      AIX 7200-03-03
                VIOS 3.1.0.10      AIX 7200-03-02
VIOS 2.2.6      VIOS 2.2.6.41      7200-03-03
                VIOS 2.2.6.32      7200-03-02
                VIOS 2.2.6.31      7200-03-01
                VIOS 2.2.6.21      7200-02-02
                VIOS 2.2.6.10      7200-02-01
                VIOS 2.2.6.0       7100-02-00
VIOS 2.2.5      VIOS 2.2.5.60      7200-01-06, 7200-02-04, 7200-03-04 or 7200-04-01
                VIOS 2.2.5.50      7200-03-01
                VIOS 2.2.5.40      7200-02-02
                VIOS 2.2.5.30      7200-02-01
                VIOS 2.2.5.20      7200-01-02
                VIOS 2.2.5.10      7200-01-01
                VIOS 2.2.5.0       7100-04-03

The complete chart found at the link also includes information for earlier VIOS versions, as well as AIX 6.1 and AIX 7.1. But again, going forward, managing VIOS 3.1.x from NIM requires a NIM master running AIX 7.2, so these are the recommendations that matter. If your NIM and VIO servers aren’t yet updated, get moving.

Virtualizing Storage with LPM via vSCSI

Edit: vSCSI or NPIV both work for LPM

Originally posted December 3, 2019 on AIXchange

Moving to POWER9? Live Partition Mobility (LPM) can be used to virtualize storage on LPARs connected to VIO servers via vSCSI.

Recently I was asked if it’s possible to perform Live Partition Mobility (LPM) operations on LPARs that are connected to VIO servers via vSCSI. Since N_Port ID virtualization (NPIV) is so common now, I actually had to think about it, but the answer is yes: you can virtualize your storage either way. In fact, LPM was originally delivered to work exclusively with vSCSI.

This means that shops that are preparing for the arrival of new POWER9 hardware should be able to migrate with no downtime, assuming their older systems are virtualized and that the SAN is zoned properly. Below is an excerpt from the IBM Developer page on LPM:

“You must have a minimum of two machines, a source and a destination, on POWER6 or higher with the Advanced Power Virtualization Feature enabled. The operating system and application must reside on a shared external storage (Storage Area Network). In addition to these hardware requirements, you must have:

  • One hardware management console (optional) or IVM.
  • Target system must have sufficient resources, like CPU and memory.
  • LPAR should not have physical adapters.

Your virtual I/O servers (VIOS) must have a Shared Ethernet Adapter (SEA) configured to bridge to the same Ethernet network which the mobile partition uses. It must be capable of providing virtual access to all the disk resources which the mobile partition uses (NPIV or vSCSI). If you are using vSCSI, then the virtual target devices must be physical disks (not logical volumes).”
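
Before attempting a move, you can sanity-check things from the HMC command line. Here is a minimal sketch; substitute your own managed system and partition names:

# a value of 1 means the managed system supports LPM
lssyscfg -r sys -F name,active_lpar_mobility_capable
# validate the migration without actually moving anything
migrlpar -o v -m source_system -t target_system -p my_lpar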

Also note the limitations of LPM, as Jaqui Lynch explains (excerpted below):

“LPM is not a replacement for disaster recovery or high availability solutions. It’s designed to move a running LPAR or a properly shutdown LPAR to a different server. It cannot move an LPAR with a crashed kernel or from a failed machine. If all the prerequisites are met, LPM can be used to move an AIX, IBM i or Linux LPAR from one LPM-capable POWER server to another compatible server. By compatible I mean that it has to meet the requirements for the Power Systems server, the management console, the VIOS (PowerVM) and the LPAR itself.”

Finally, this reminder. When you purchase POWER9:

“Each system will have PowerVM Enterprise Edition built in, and IBM is helping customers migrate by providing 60-day temporary licenses for existing machines that don’t already have PowerVM Enterprise Edition. This will allow you to use live partition mobility to migrate running workloads from existing POWER7 or POWER8 machines to your new POWER9 machine.”

Take a Sneak Peek at New Hardware

Edit: This is still a great tool

Originally posted November 26, 2019 on AIXchange

The IBM Interactive Product Tour Catalog allows you to virtually preview 3D representations of new hardware.

Are you familiar with the IBM Interactive Product Tour Catalog?

Just go here, select Systems and Servers, and then click on IBM Power Systems.

From there, choose Enterprise or Scale-out, and IBM POWER9 or POWER8.

Finally, choose an actual system.

It may take a few seconds for the page to load, but it’s worth the wait. For your system of choice you can see different views of the system and get some basic information.

Of course there’s important and useful information in standard IBM presentations (like this one), but the way the catalog allows you to visualize the new hardware makes a much stronger impression in my opinion. There’s just something about being able to look at the 3D image. It’s like being there on the raised floor, ready to open the box and install it in the rack. You can (virtually) take the lid off and see where the parts go, where the I/O cards go, etc. If you’re ordering a new system sight unseen, this sneak peek provides a unique perspective.

A Look at Cloud Automation Manager

Edit: I still like to watch a demo or video

Originally posted November 19, 2019 on AIXchange

This short demo video breaks down the process of using Cloud Automation Manager to create templates and deploy virtual machines.

We are always looking for ways to simplify our lives, but simplification is more elusive in some situations than others.

Consider a small environment where it makes sense to build LPARs by hand. The plan is to keep machines on site, and new builds are infrequent enough that extensive automation isn’t a pressing need.

In other environments where builds are automated, using AIX Network Installation Management (NIM) with some homegrown scripts to build new LPARs makes perfect sense.

Then there are those sites that are either drowning in new deploys or allowing users to build their own LPARs on demand. These types of installations can benefit from tools like PowerVC and IBM Cloud Automation Manager.

This video gives you a taste of what it takes to deploy existing VMs with Cloud Automation Manager and add AIX, IBM i and Linux applications to an IBM Cloud Private catalog:

Here is an excerpt of the video’s description:

“Joe Cropper demonstrates how to configure IBM Cloud Private and Cloud Automation Manager to create an Oracle database self-service catalog entry within IBM Cloud Private. When the database is deployed, it provisions the underlying resources through IBM PowerVC.”

Demos usually resonate with me when I can see actual interaction with products. As valuable as articles and presentations can be, a picture—or, in this case, 9:32 of video—really is worth a thousand words. This video nicely breaks down the whole process of using this tool to create templates and deploy virtual machines.

Two Fast-Approaching Deadlines for Power Systems Users

Edit: Always good to nominate yourself or others, and take the AIX surveys

Originally posted November 14, 2019 on AIXchange

Technical editor Rob McNelly highlights upcoming deadlines for IBM Champions nominations and the 2020 AIX survey.

A quick reminder about two rapidly approaching deadlines.

First, the nomination period for IBM Champions ends on Friday, Nov. 22. Consult these tips and tricks if you plan to make a submission. Also consider this information, which comes from an internal newsletter for IBM Champions:

We are particularly looking for IBM Champions who advocate for and have expertise in Enterprise AI, Cloud, and Enterprise Linux. Nominate yourself and those who already advocate for our brand and story through speaking at events, blogging, customer references, videos, leading user groups, etc.

As noted, you can nominate yourself or someone else. As a Lifetime IBM Champion, I encourage you to get involved in the process.

Also, I want to amplify the message about the 2020 AIX survey. The deadline for that is also this Friday, Nov. 22:

Last year’s survey provided valuable insight into the needs and plans of AIX users around the world. We hope you’ll take a part in shaping the insights of this year’s survey. Watch for the results in early 2020.

Take the AIX survey here. Apologies for not providing more advance notice. I’ll try to get further out in front of these things next time around.

AIX Security and IBM i TR Announcement Highlights

Edit: An end of an era.

Originally posted December 2020 in the final issue of IBM Systems Magazine

Technical Editor Rob McNelly breaks down the latest IBM i and AIX announcements

In October, IBM made a series of announcements covering an array of products and offerings, including IBM Power Systems™ hardware enhancements, new AIX® features and function and the latest IBM i technology refreshes (TRs). 

AIX Security and Availability Updates 

Along with security, high availability is emphasized with the AIX announcements. On that note, support for logical volume (LV) encryption is a huge development.

Part of the AIX 7.2 Technology Level (TL) enhancements, LV encryption support provides for efficient encryption/decryption of data within an LV. While you won’t be able to encrypt rootvg at this time, you can encrypt other system LVs. As noted in the announcement, where available AIX will use on-chip cryptographic acceleration, allowing for data-at-rest encryption. (Learn more.)
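
The announcement doesn’t spell out the commands, but based on the published material the flow looks roughly like the sketch below. The flags are my reading of the documentation, so treat each one as an assumption and verify against the TL’s release notes before touching a real volume group:

chvg -k y datavg                  # assumption: enables encrypted LVs in an existing VG
mklv -k y -y securelv datavg 10   # assumption: creates an encryption-enabled LV of 10 LPs
hdcryptmgr showlv securelv        # reports the encryption status of the LV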

PowerHA® SystemMirror 7.2.5 features a Geographic Logical Volume Manager (GLVM) configuration wizard that’s designed to simplify disaster recovery and enable clients to configure and orchestrate multiple parallel GLVM instances from a source to a target. Assuming you have the bandwidth, multiple network streams should improve replication speed, and the addition of compression should make replication even faster and more efficient. If you lose one path between nodes, you can continue mirroring your data via another path through the improved network monitoring interface. And new statistics provide greater insight into replication status. With many cloud providers lacking storage-based replication options, GLVM can help facilitate cloud migrations. (Learn more.)

The new create_ova command creates an open virtual appliance (OVA) package. An OVA package is an archive file that can be deployed as a VM and imported into any PowerVC environment containing a supported storage device or any cloud service that supports the Open Virtualization format (OVF) packaging standard.

IBM’s Chris Gibson discusses this in detail in his blog (“Creating Bootable AIX OVA Images”)

OVA could be used to migrate LPARs to another data center or to the cloud, assuming you can take the downtime associated with creating and sending the file over the network, and then using that file to deploy the server image. In tandem, GLVM enhancements and the addition of create_ova help simplify cloud migrations.

With Version 9.2.950 of the IBM Virtual HMC (vHMC), clients can use the HMC to back up and restore their Virtual Input/Output Server (VIOS), and also store VIOS backups on the HMC itself. For sites with limited VIOS skills, using a network installation management (NIM) server to restore VIO images in a disaster situation is a lot to ask. In small environments—say, one HMC and one POWER® server—recovery could be even more problematic with no other machine to host a NIM server. The HMC being a viable backup/restore option should simplify the process. We’ll see about scalability: this may not be great for backing up the VIO servers of huge POWER fleets, but there’s a place for it. (Learn more.)

IBM i TRs

TRs were issued for IBM i 7.4 TR3 and IBM i 7.3 TR9. With this announcement IBM delivered 15 new or enhanced open-source packages, including pigz, chsh, MariaDB and PostgreSQL for database flexibility. These additional technologies are intended to give developers greater freedom of choice when building applications on IBM i. (Learn more)

Also available are the new IBM i Playbooks for Ansible®, which automate tasks like provisioning cloud environments, deploying applications, applying security patches and much more. Automation is built in across IBM’s high availability/disaster recovery portfolio. Additional object types and improved application evaluation capabilities have been brought to Db2® Mirror for i, while BRMS delivers significant ease of use based on IBM i Services. (Learn more.)

On the security front, base authentication in IBM Integrated Web Services (IWS) no longer requires an HTTP server, and IWS now enables the use of third-party security services. PowerSC MFA can now run on IBM i alongside AIX and Linux®, providing a single dashboard for security management of any environment. Multifactor authentication is also built into the latest release of PowerVC, increasing the security of private cloud and virtualized environments.

More Information

A complete summary of the Oct. 6 IBM announcement.

NPIV Client Boot Hanging Fix

Edit: I will miss sharing these tips and tricks

Originally posted November 5, 2019 on AIXchange

IBM Support troubleshoots an NPIV client boot hanging with Error Code 554.

On Twitter, Chris Gibson highlighted an IBM Support document that troubleshoots an NPIV client boot hanging with Error Code 554, which I’ve excerpted below:

“AIX NPIV Client Boot Hanging with Error Code 554 due to max_xfer_size

Cause
This is cause [sic] by client’s max_xfer_size setted too high compared to VIOS Fiber Physical Adapter

Environment
AIX & NPIV

Diagnosing The Problem 
To confirm the boot failure is due to max_xfer_size’s value too high on AIX Client, perform a Boot Debug on AIX Client.

To enable this Boot Debug follow guidance:
http://www-01.ibm.com/support/docview.wss?uid=isg3T1019037

Then trap the following message: “open of /dev/fscsi0 returned 61”

Resolving The Problem
Boot AIX client partition in Maintenance Mode, put back max_xfer_size value equal or lower to the one configured on the VIOS… .

Then you can set this value in AIX Client ODM (Maintenance Mode) with chdev command and perform a normal Boot.

Another way to address this would be to change “max_xfer_size” on Physical HBA to match Client’s one, but this change require VIOS’ reboot. This way seems more risky if one of LPARs is not fully operational from Multi-Pathing perspective.”

Be sure to read the entire thing.
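
For reference, checking and staging the fix looks roughly like this (device names and the value are illustrative):

lsattr -El fcs0 -a max_xfer_size              # compare this on the VIOS HBA and on the client adapter
chdev -l fcs0 -a max_xfer_size=0x100000 -P    # stage a matching value in the ODM; applied at reboot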

I love sharing these tips for a couple of reasons. One is self-interest: many times when I need information on an issue, my web search returns one of my long-forgotten AIXchange posts. And hopefully, by raising these issues here, you’ll think of this post and/or that document, or you’ll simply remember not to set max_xfer_size too high in the first place.

developerWorks Connections Pages Are Going Away

Edit: There is not a day that goes by that I do not click on an article that takes me to a landing page that tells me that information is lost forever. Still heartbreaking to lose so much knowledge.

Originally posted October 29, 2019 on AIXchange

Big news: IBM is sunsetting its entire developerWorks Connections platform.

I’m a long-time reader of IBM developerWorks, a vast collection of blogs, groups, forums and much more that’s maintained by some accomplished IBM technologists. Chris Gibson’s blog on AIX and PowerVM, the AIX Virtual User Group and the AIXpert blog are just a few of my most-clicked bookmarks.

So I was surprised by the recent announcement that IBM is sunsetting its entire developerWorks Connections platform on Jan. 1, 2020. Below is an excerpt of that announcement:

“As part of the overall IBM effort to reduce the number of duplicate client interaction portals and simplify the user experience with the digital side of IBM, we are sunsetting the developerWorks Connections platform which includes all URLs starting with https://ibm.com/developerworks/community/.

Affected pages will have a banner at the top identifying it as a target for removal on December 31, 2019. This removal includes all community blogs, wikis, forums, activities and files. Please send any questions or comments on our Support form.

On January 1, 2020, the developerWorks Connections platform and its apps will no longer be available.

Q. Why are these Connections pages going away?
A. IBM is consolidating major content portals to improve the customer experience.

Q. What specific content of mine will be impacted and removed?
A. Your community and its apps will no longer be available including: Activities, blogs, files, forums and wikis…

Q. Can I receive a copy of all of my content I’ve published?
A. Unfortunately due to technical constraints of this product, we are unable to provide that service.”

If you want to leave feedback, you can do so here.
https://developer.ibm.com/code/dw-connections-sunset-support/

As someone who has blogged for 12 years, I understand that a great deal of technical information, even if it goes back several years, remains relevant and useful for today’s IT pros. If you’re a regular reader, or even if you only occasionally peruse the developerWorks community pages, I encourage you to register your opinion with IBM. My hope is that if they hear from enough of us, they’ll at least put off the sunsetting, giving us more time to figure out how to archive this valuable content.

A Look at the Latest IBM Announcements

Edit: This is quite a bit to digest.

Originally posted October 22, 2019 on AIXchange

Here’s what AIX/IBM Power Systems users should know about the latest announcements.

IBM has come out with a bunch of announcements this month.

Here are some details on a few announcements and updates that should be of particular interest to AIX/IBM Power Systems users:

PowerHA SystemMirror V7.2.4 for AIX enhancements

  • Cross-cluster verification: Compare the test cluster and validate that the new production cluster has the same configuration parameters as the preproduction test cluster.
  • Availability metrics updates: With the availability metrics feature, PowerHA SystemMirror clients can see a timeline of events that have happened in the cluster, including the duration of each step. This release of the tool adds reports on configuration bottlenecks and historical averages.
  • Smart Assist currency updates:
    • Adds support for the latest version of the Smart Assist application.
    • Enables support for the latest version of Spectrum Protect (IBM Tivoli Storage Manager) with PowerHA.
    • Federated Security is enhanced to enable PowerHA SystemMirror commands that are not already supported for role-based access control (RBAC). Now the clmgr command returns a proper error if an unprivileged user performs an action that is not allowed. Commands currently enabled for RBAC can be tested for proper behavior on the basis of different PowerHA SystemMirror supported roles. Any deviation from proper behavior of RBAC-enabled commands will be fixed. An option is provided for users to list the commands that they are authorized to perform.
    • IBM Db2: With PowerHA SystemMirror, users can configure IBM Db2 with multiple databases and create a new monitor for each database, enabling them to handle individual DB failures without impacting the other DBs in the multiple parallel DB configuration.
  • Support for IBM SAN Volume Controller (SVC) role-based access: With this enhancement, users can configure a user name while adding an SVC cluster in PowerHA SystemMirror. The user name configured using the smit or clmgr command for an SVC cluster is used for all the SVC operations carried out for that particular SVC cluster.
  • In addition, PowerHA V7.2.4 supports SVC 8.1. PowerHA SystemMirror V7.2.4 for AIX encrypts the various passwords stored in the configuration database. This ensures that passwords at rest are encrypted for security compliance requirements.
  • Increases the maximum number of resource groups from 64 to 256.
  • IBM WebSphere MQ listener support: Clients can configure PowerHA Smart Assist for WebSphere MQ to start, stop, and monitor WebSphere MQ listeners.

AIX 7.2 TL4
The IBM AIX operating system provides clients with an enterprise-class IT infrastructure that delivers the reliability, availability, security, performance, and scale that is required to be successful in the global economy. IBM AIX 7.2 TL4 provides the following enhancements:

  • New levels of workload scalability
  • New levels of OS security
  • Enhanced scope and usability for AIX Live Update (high availability)
  • New I/O features and storage enhancements
  • File systems improvement

Additionally, the AIX Toolbox (across AIX 6.1, AIX 7.1, and AIX 7.2) is enhanced.

PowerSC Standard Edition V1.3
IBM PowerSC provides a security and compliance solution that is optimized for virtualized environments on IBM Power Systems servers. Security control and compliance are key components needed to defend virtualized data centers and cloud infrastructure against evolving threats.

PowerSC Standard Edition V1.3 enhancements include:

  • New built-in compliance profiles
    • An SAP AIX hardening profile
    • A US Department of Defense-Security Technical Implementation Guides for AIX 7 profile
    • A Center for Internet Security (CIS) for AIX profile
  • IBM i integration through a new IBM i compliance automation profile
  • Support for alt-disk updates
  • Improvements within the Patch Management Component
  • Improvements within the reporting section
  • Multifactor authentication enablement

IBM PowerVM V3.1.1, IBM PowerVC V1.4.4, IBM Virtual HMC (vHMC) 9.1.940, and the IBM Cloud Management Console (CMC) Monthly Term offering
IBM PowerVM V3.1.1, which delivers industrial-strength enterprise virtualization for IBM AIX, IBM i, and Linux environments on IBM POWER processor-based systems, has expanded function and new management capabilities.

IBM PowerVM Hypervisor has been updated with new features that include:
  • Support for DRAM-backed persistent memory for faster VM restarts
  • Enhanced SR-IOV and vNIC support
  • LPM performance improvement

IBM PowerVM V3.1.1 has been updated with significant I/O performance, efficiency, usability, and scaling enhancements, including:

  • An increased number of NPIVs allowed per Fibre Channel port on the 32 Gb Fibre Channel HBA. The current limit is 64; more NPIVs per port will provide better utilization, density, and efficiency.
  • NPIV multiqueue (VIOS server) for improved performance and scalability.
  • Fibre Channel device labels for improved manageability.
  • iSCSI performance enhancements.
  • Improved VIOS Upgrade Tool capabilities.
  • SSP network resiliency improvements.

IBM PowerVC V1.4.4, which is designed to simplify the management of virtual resources in IBM Power Systems environments, has been updated with several enhancements to storage integration, usability, and performance, including:

  • IBM i license key injection
  • Hitachi GAD support
  • Initiator storage tagging
  • Live Partition Mobility (LPM) VMs to the original system after evacuation
  • Ability to pin VMs to a specific host
  • Ability to dynamically add PowerVC created VM to an HMC user resource role
  • Inactive partition migration
  • Image sharing between projects
  • New restricted administrator assistance role, with no deletion access
  • NovaLink support for multivolume attachment
  • FlexVolume driver support for Red Hat OpenShift

IBM Virtual HMC (vHMC) 9.1.940, released in conjunction with firmware FW940, which gives clients the ability to use their own hardware and server virtualization to host the IBM-supplied HMC virtual appliance, has been updated with the following enhancements:

  • User mode NX accelerator enablement
  • Support for SR-IOV logical ports in IBM i restricted I/O mode
  • Injection of IBM i license keys
  • Improved LPM error messages
  • Ability to manage resource roles automatically in HMC
  • Progress indicators for specific HMC commands
  • Email notifications for scheduled operations
  • IBM PowerSC Multi-Factor Authentication (MFA) in-band support for HMC

IBM Cloud Management Console (CMC) Monthly Term offering is a cloud-delivered monitoring service that connects to multiple HMCs and gives administrators an aggregate view of their entire Power infrastructure. CMC has been updated with the following enhancements:

  • Support for Enterprise Pools 2.0
  • Cloud host options that include North America and Europe

An Intro to AIX File Systems

Edit: It is always good to go back to basics now and again

Originally posted October 15, 2019 on AIXchange

The IBM Knowledge Center’s overview of AIX file systems is handy as both a refresher and introduction.

On Twitter, Soumya Menon (@soumya_159) linked to this IBM Knowledge Center overview of AIX file systems, excerpted below:

“A file system is a hierarchical structure (file tree) of files and directories.

This type of structure resembles an inverted tree with the roots at the top and branches at the bottom. This file tree uses directories to organize data and programs into groups, allowing the management of several directories and files at one time.

A file system resides on a single logical volume. Every file and directory belongs to a file system within a logical volume. Because of its structure, some tasks are performed more efficiently on a file system than on each directory within the file system. For example, you can back up, move, or secure an entire file system. You can make a point-in-time image of a JFS file system or a JFS2 file system, called a snapshot.

… To be accessible, a file system must be mounted onto a directory mount point. When multiple file systems are mounted, a directory structure is created that presents the image of a single file system. It is a hierarchical structure with a single root. This structure includes the base file systems and any file systems you create. You can access both local and remote file systems using the mount command. This makes the file system available for read and write access from your system.

… Some of the most important system management tasks have to do with file systems, specifically:

  • Allocating space for file systems on logical volumes
  • Creating file systems
  • Making file system space available to system users
  • Monitoring file system space usage
  • Backing up file systems to guard against data loss if the system fails
  • Maintaining file systems in a consistent state”
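
As a quick refresher on how those tasks map to commands, here is a minimal sketch (names and sizes are placeholders):

crfs -v jfs2 -g datavg -m /appdata -a size=10G -A yes   # allocate space and create a JFS2 file system
mount /appdata                                          # make it available at its mount point
df -g /appdata                                          # monitor space usage
snapshot -o snapfrom=/appdata -o size=1G                # take an external point-in-time snapshot
backup -0 -u -f /dev/rmt0 /appdata                      # back it up by i-node to guard against data loss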

There’s much more, of course. Read the whole thing, and maybe check out some of the related overviews―including docs on managing, configuring and maintaining file systems―that are linked at the bottom of the page. Also be sure to open the table of contents (its appearance may vary depending on the browser being used) for links to a host of other topics:

Workload manager
Device nodes
Device location codes
Device drivers
Setting up an iSCSI offload adapter
PCI hot plug management
Multiple Path I/O
Targeted device configuration
Tape drives
USB device support
Caching storage data
Login names, system IDs, and passwords
Common Desktop Environment
Live Partition Mobility with Host Ethernet Adapters
Relocating an adapter for DLPAR
Loopback device
Files Reference
IBM Hyperconverged Systems
AIX on KVM
Installing
Networking
Operating system management
Performance tuning
Printing
Programming for AIX
Security
System management
Technical reference
Troubleshooting
IBM Workload Partitions for AIX

We may take the basics for granted, but it never hurts to get a refresher on these topics.

Speeds and Feeds at Your Fingertips

Edit: Some links no longer work.

Originally posted October 8, 2019 on AIXchange

A handy, consolidated spreadsheet lets you quickly compare rPerfs and CPW across different hardware generations.

Have you ever wondered how much faster POWER9 systems benchmark against earlier versions of the hardware? Well there’s an easy way to find out. Go to the IBM Lab Services UKI (United Kingdom and Ireland) POWER team web page to download a spreadsheet with this information.

As you can see, the most recent update came in August:

Date of Change   Description
13-Aug-2019      Added 6-core E980 servers
7-Sept-2018      Added POWER9 servers (E950 and E980)
27-Feb-2018      Added POWER9 servers (S922, S914 and S924) and updated POWER8 servers to reflect Spectre and Meltdown
12-Sep-2017      Added S812 POWER8 servers and updated S822 POWER8 servers
8-Nov-2016       Added C model POWER8 servers (E850C, E870C and E880C)

To download the actual data, do what the large red text says and click here for consolidated spreadsheet.

I’m sure I’m not the only one who enjoys being able to compare rPerf and CPW values across different hardware generations. While the POWER Facts and Features guides also have this information, this spreadsheet puts everything at your fingertips. Check it out.

Proactive Support Now Default Option on E980 and E950

Edit: I still recommend customers get enhanced support.

Originally posted October 1, 2019 on AIXchange

IBM’s Proactive Support is now included by default for all mid-range and enterprise POWER9 Systems.

For those who create hardware orders, IBM’s configuration tool now includes Proactive Support by default for all E980 and E950 configurations.

Under Proactive Support, customers are assigned an IBM account manager. Through frequent contact, including regularly scheduled calls, a strong working relationship develops. For IBM, the collaboration provides knowledge of the individual environment, making it possible to respond more quickly when a ticket is opened or when new vulnerabilities arise that require action.

Having worked with teams that have benefited from this type of enhanced relationship with IBM Support, I recommend you look into this offering.

This is from the Aug. 9 announcement letter:

“Clients have found significant value in IBM’s Proactive Support offerings on mission-critical systems, as this provides personalized support, proactive recommendations, and accelerated response times versus standard support. As a result, IBM is including the IBM Proactive Support in the default configuration for all Mid-Range and Enterprise IBM POWER9 Systems – for IBM AIX, IBM i, Linux, and SAP HANA workloads. Other configurations are also available.”

I saw a slide deck that highlights these points:

“IBM Proactive Support can streamline management, reduce costs and increase availability. This solution is designed to:

  • Provide a single source of customized support for hardware and software from multiple vendors, helping you improve the return on your IT investment.
  • Balance high availability with improved affordability to help maintain converged, virtualized and cloud-based IT environments.
  • Increase your IT staff’s productivity and free it to focus on more strategic initiatives.
  • Supply an optimized support model with global delivery through worldwide IBM technical centers.
  • Shorten the response time to failures and avoid a major impact on revenue, cost, customer satisfaction, reputation and more.”

A POWERful Ad

Edit: Customers will not buy what they do not know about. It looks like the videos have been removed.

Originally posted September 24, 2019 on AIXchange

Ads for IBM Power Systems must convey both technical and marketing information—undoubtedly quite the needle to thread.

A while back I discovered this on the Twitter feed of IBM’s Paulo Carvao (@pcarvao). It’s the best Power Systems ad I’ve seen yet. Check it out; it only takes a couple minutes.

The video not only highlights the world’s fastest supercomputers that run on POWER9 processors, it also mentions both AIX (“the No. 1 OS in the UNIX market”) and IBM i (“double-digit growth in 2018”). Also cited are Linux, SAP HANA running on POWER, and much more.

The tagline at the end states: “the time to bet your tomorrow on IBM Power Systems is now.”

If you’re suitably inspired by that, you may want to watch another video: “IBM POWER9: Let’s put smart to work.”

This particular video came out in early 2018. See if you can spot (at the 1:10 mark) a tiny picture of me among many other IBM Champions.

What do you think of these videos? If it was your call, what would you like to see highlighted? I have my own ideas, but I recognize the challenges of making these sorts of ads. The trick is balancing technical and marketing information while making the whole thing visually interesting for non-technical viewers. Undoubtedly that’s quite the needle to thread.

Legacy Elastic CoD Offerings Being Withdrawn

Edit: Hopefully you took care of this

Originally posted September 17, 2019 on AIXchange

Those using legacy versions of Elastic Capacity on Demand (CoD) will soon need to upgrade their codes.

In July, IBM’s David Spurway tweeted about Elastic Capacity on Demand (CoD). The enablement codes used with legacy versions of Elastic CoD will soon be withdrawn, so if you’re on an older version, you’ll need to get new codes in short order.

Below is an excerpt from the IBM announcement letter:

“Hardware withdrawal: Power System Legacy Elastic CoD features

As we expand Elastic CoD via Entitled System Support Worldwide for flexible provisioning and usage on the web in minutes, we are withdrawing our legacy Elastic CoD offerings.

For additional support on how to administer these new features, see the Entitled System Support website.

Effective October 31, 2019 and April 30, 2020, IBM will withdraw from marketing the selected IBM Power Systems features listed in the Withdrawn products section. On or after the effective dates of withdrawal, you can no longer order these features directly from IBM. Field-installed features associated with the machine types not listed in this announcement continue to be available.

If a client continues to have prepaid Elastic CoD credits regarding this withdrawal, contact the ECoD project office to do a one-time migration to Elastic CoD via Entitled System Support.

For new orders, the client-requested arrival date (CRAD) can be no later than November 29, 2019 and May 29, 2020.”

The Implications of IBM’s Red Hat Acquisition

Edit: Have you noticed any disruptions to either company?

Originally posted September 10, 2019 on AIXchange

Red Hat’s acquisition by IBM is bound to take us in some interesting directions.

In July, Red Hat announced its acquisition by IBM. Senior vice president and CTO Chris Wright also posted this brief Q&A. I’ll excerpt a few key points:

“Q: Will the acquisition change the way Red Hat contributes to upstream projects?
A: No, Red Hat will continue to contribute to and participate in open source projects as we do today.

Q: Will the work that Fedora does, including all of the Editions, Spins, and Labs, change as a result of the acquisition?
A: Fedora’s build products will not be affected. All changes will continue to be driven by the Fedora Project.

Q: Are Red Hatters still free to work on open source projects outside of Red Hat?
A: Yes. Red Hat associates can contribute to and participate in open source projects outside of Red Hat as they do today.

Q: Will community projects be forced to support specific software or hardware?
A: No. Any inclusion of software and hardware support will continue to be driven by the community.

Q: Will community projects or Red Hat contributors be forced to use specific technologies?
A: No.

Q: Will the logos of Red Hat-sponsored projects change as a result of the IBM acquisition?
A: No logos of Red Hat-sponsored projects will change as a result of the acquisition.”

From the IBM side, check out this joint interview with IBM and Red Hat officials, this developer perspective, and this interview that covers IBM Cloud Paks and more. And a host of additional information can be found here.

The IBM-Red Hat union is bound to take us in some interesting directions.

IBM Guide Shows a Lengthy Road Ahead for AIX

Edit: It is good to know AIX will be around for years to come

Originally posted September 3, 2019 on AIXchange

IBM’s recently published guide lays out the company’s long-term commitment to AIX.

IBM recently published what it calls “an executive guide to the strategy and roadmap for the AIX operating system for IBM Power Systems.”

Don’t let the dry title fool you. This document has some good information about IBM’s long-term commitment to AIX. Since registration is required to access this 15-page PDF, I thought I would give you a taste.

This is from the executive letter:

“AIX has been the foundation of mission-critical workloads for a large and dedicated client community for more than thirty years. AIX has evolved to help drive cloud and enterprise AI initiatives for thousands of enterprise businesses and organizations around the world. And now, the team behind AIX have developed a forward-looking strategy and roadmap.”

This is from the introduction:

“As IBM Power Systems expands its portfolio to deliver value-driven offerings for the emerging Enterprise AI workload market, we remain committed to delivering a roadmap of innovation for both Power Systems hardware and AIX. The strategy focuses on supporting workload growth for the POWER architecture and solidifies an investment stream and market relevance for the AIX platform. Power Systems with AIX is the foundation for many core business applications and database environments.

AIX is deployed across a variety of industries such as finance, manufacturing, retail, telecommunications, healthcare, travel and government, along with many others. Today, it’s no secret businesses are experiencing growth as it relates to data. Fortunately, AIX is and will continue to be built to meet such growing demands for its community.”

The executive guide mentions hybrid multi-cloud, public cloud, PowerVM and PowerVC and more. A chart notes that most top companies—”most” as in 80-90 percent—run their businesses on IBM Power Systems, and they experience the lowest percentage of unplanned server downtime.

And as indicated in that long title, there is an AIX roadmap. It goes into the next decade… and the one after that. As of now, the plan for AIX extends to 2030.

Check out the guide’s closing statement:

“With thirty years of release engineering practices, AIX has a proven model for delivering new hardware support and software innovation through TLs. This approach minimizes disruption for AIX clients and ISVs by enabling them to easily adopt new capabilities because we are able to introduce all-new features via TLs. Experience has shown that new major AIX releases require additional qualification activities by clients and create a dependency on ISV certification and support statements before clients can adopt the new releases. TLs minimize client disruption and the possible need for ISVs to recompile, re-test and re-certify their software.

As IBM enhances AIX and plans updates, the following factors are considered. AIX has a very strong commitment to binary compatibility for APIs and command-line outputs across TL releases. Even across AIX major releases where compatibility impacts may be considered, this compatibility is an important goal. Binary compatibility changes are very carefully reviewed with new major releases. If new technology innovation in AIX were to challenge binary compatibility in a significant manner, a new major AIX release would be considered.”

AIX definitely isn’t going away. Take a moment to register and read the entire guide for yourself.

Using Server Message Block on AIX

Edit: Do you use this?

Originally posted August 27, 2019 on AIXchange

Server Message Block (SMB) 2.1 support is now available for AIX 7.2.

On Twitter, IBM’s Petra Bührer noted the availability of Server Message Block (SMB) 2.1 support for AIX 7.2. She followed that up by tweeting several IBM Knowledge Center links that explain how to use SMB.

On the SMB client filesystem:

The SMB server is a server that runs Windows Server 2012 or Windows Server 2016 server operating system. In each of these server operating system types, a directory can be exported as a share. This share can then be mounted on an AIX logical partition by using the SMB client file system. By using the SMB client file system, you can access the shares on SMB servers as local file systems on the AIX logical partition. You can use the SMB client file system to create, delete, read, and write files and directories on the SMB server and also to modify the access duration to these files and directories. However, you cannot change the owner or access permission of these files and directories.

These docs cover the mount command, the smbcd daemon, the smbcstat command, the smbctune.conf file, and the smbctune command.
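
Here’s a minimal sketch of what a mount might look like, assuming a Windows server named winsrv exporting a share named data (all names here are hypothetical; check the mount documentation above for the exact options your level supports):

# mkdir /mnt/winshare
# mount -v smbc -n winsrv/smbuser/Passw0rd /data /mnt/winshare

Once mounted, the share behaves like a local file system, subject to the ownership and permission limitations noted above.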

This is welcome news for those working in an environment with UNIX and Windows servers. Typically separate IT groups are managing these servers, so getting this working requires some coordination. But having the option to share files between these systems can be beneficial.

POWER9 Quick Reference Guide Now Available Via App

Edit: I still use it

Originally posted August 20, 2019 on AIXchange

The IBM Power Systems Quick Reference Guide is available in iOS and Android formats.

In the span of a few days, I received an email and came across this tweet touting the availability of the IBM Power Systems Quick Reference Guide in iOS and Android formats:

GREAT NEW tool now available for POWER Systems sellers! Actually for ANYONE—partners and customers!

This app provides the information from the traditional PDF “POWER Quick Reference Guide” plus more, now in app format. Available for both iOS and Android.

IBM POWER Quick Reference is a mobile application for IBM POWER Server Reference guide.

Reference details available for POWER8 and POWER9 servers. It is a simple offline application which will help to check POWER server basic details quickly. It will act as a POWER server quick reference guide. The key benefits are to view POWER servers details like Processor, Memory, Disk etc.

Once application is installed, application will work even in offline. Application will be updated as and when new product launches.

I was intrigued enough to visit the Google Play Store and install the app. For years I’ve been looking up the PDF version of the reference guide, but being able to do this no matter where I am is definitely a plus. The menus are simple and intuitive, and now all this valuable information is at my fingertips.

What do you think? Is this something you plan to use?

Managing Multiple Instances of altinst_rootvg

Edit: Still good information

Originally posted August 13, 2019 on AIXchange

Chris Gibson recently tweeted this IBM Support technote on how to manage multiple instances of altinst_rootvg on AIX.

Earlier this summer Chris Gibson tweeted this IBM Support technote that explains how to manage multiple instances of altinst_rootvg on AIX.

Here’s the gist of it:

Question
What is the recommended way of managing multiple instances of altinst_rootvg and applying interim fixes to each of the clones?

Answer
This document describes the recommended preparation and process when managing multiple instances of altinst_rootvg. We will discuss the usage of the ‘-v’ flag for the ‘alt_rootvg_op’ command, applying interim fixes when cloning the rootvg and finally post-checks and FAQ.

What will this document cover?
· Creating of multiple instances of altinst_rootvg and renaming them for convenience and easy management by:
– cloning of the rootvg using the ‘alt_disk_copy’ command
– cloning and updating the rootvg using ‘alt_’ commands
– migrating the rootvg using the ‘nimadm’ command
– renaming of volume group(s) by using the ‘alt_rootvg_op’ command
· Waking up and putting back to sleep cloned/migrated disk(s)
· Applying interim fixes to the migrated and cloned disk(s)

What this document will not cover?
· Most flags used with ‘alt_’ commands
· Alternate disk clone and nimadm pre-checks and requirements
· Any NIM related operations in conjunction with ‘alt_disk_copy’
· The phases of the nimadm process
· Most flags used with the ‘nimadm’ command
· Requirements for nimadm
· Limitations for nimadm

Frequently asked questions

Q: Can I wake up 2 rootvgs at the same time and perform operations on them?
A: No. Only one rootvg can be woken up at a time.

Q: Is it possible to wake up a rootvg that is at a higher oslevel than the rootvg used to initiate the wake up?
A: No. The running system’s operating system must be a version greater than or equal to the operating system version of the volume group that undergoes the wake up.

Q: Is there a limitation of maximum clones supported?
A: No there is no limitation.

Q: Can I update one of the clones in the future to a new Service Pack or Technology level without having to boot in it?
A: Yes. First you would wake up the rootvg undergoing the update and run the following:
#alt_rootvg_op -W -d <hdisk#>
#alt_rootvg_op -C -b update_all -l /<path to updates>
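
To illustrate the renaming workflow the technote describes, here’s a minimal sketch, assuming a spare disk named hdisk1 and a volume group name of my own choosing (both hypothetical):

#alt_disk_copy -d hdisk1
#alt_rootvg_op -v alt_rootvg_sp2 -d hdisk1

The first command clones the running rootvg to hdisk1; the second renames the resulting altinst_rootvg so you can tell your clones apart.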

Read the whole thing, then try it out on a test box. And keep in mind that supporting alt disk operations like this is another unique benefit of AIX.

The POWER Investment: Long-Term Viability of the Power Systems Platform

Edit: Some links no longer work

Originally posted August 6, 2019 on AIXchange

In this AIXchange blog, Technical Editor Rob McNelly recaps a presentation by IBMer Nigel Griffiths.

Nigel Griffiths recently gave a presentation on the long-term viability of IBM Power Systems hardware and the POWER processor. He then summarized his points here:

  1. AIX is the UNIX winner in Technology and Performance—IBM having out Researched and out Developed all the other UNIX servers from other vendors over this time.
  2. Linux on POWER performance lead—POWER9 2+ times faster than Intel per core
  3. Accelerated AI and HPC Workloads on the AC922 give world class and the current number 1 and number 2 in the Top500 HPC computers List

This is only a small part of his post, but obviously the idea that IBM servers represent an end-to-end solution is compelling.
 
Of course, processors don’t make a server. With Power Systems IBM makes the servers too and we make sure new POWER processors are designed to make powerful servers by enhancing everything else too: memory, disks, adapters and all the interconnecting buses.
 
Finally, he lays out the investment IBM—and just a couple of its largest customers—have made in this technology:
 

Is IBM going to ditch POWER?

  1. Power Systems is a large successful business. I’m not allowed to detail the numbers of specific parts of IBM (some details are in the IBM Annual Statement) but shall we say it is in the many billions of dollars scale. IBM is not going to shut that down on a whim.
  2. [About] four years ago, I was asked by a worldwide banking client to audit a particular workload on POWER7+ as it was about to move to POWER8. It was only ~ 25 CPU cores but lots of RAM and was running nicely but pretty busy.
  • I was to predict how it would run on POWER8 and any recommended tuning changes.
  • At lunchtime, the client visited my cubical to explain this was an important workload.
  • If the workload server failed for an hour, it would seriously impact the bank’s reputation.
  • If it was not available for a whole day the world economy might not recover!!!
  • “No pressure,” she said as she walks back down the corridor laughing loudly!
  • I started triple checking all my figures!
  • This is the sort of workload clients insist on running on POWER servers—it has to be fast and it has to be reliable.
  • The server was actually running in a quadruple backup server arrangement and across countries.

3. I am now working on a soak test for a High-Performance Cluster at a racing team (can’t say more). The soak test involves taking the server to 100% busy for 10 days with a 2 day cooling down period in the middle. This, I think, is due to unreliable previous hardware. The client is moving to POWER9 for high performance and super high RAS = Reliability, Availability, Serviceability. It does not break down; in the unlikely case of a breakdown it stays up and running (disabling the faulty components); then the faulty components can be replaced online or at a later convenient time. With Live Partition Mobility (LPM) running workloads can be moved while running to a different server. This is all normal for Power Systems.

 While I’ve previously explained why AIX isn’t going away, I want to make you aware of Nigel’s thoughts and some of the reasoning behind them. He covers much more ground, so be sure to read the whole post.

Understanding Network Attributes in VIOS Configuration

Edit: The first post on the new website

Originally posted July 30, 2019 on AIXchange

In this AIXchange blog post, Technical Editor Rob McNelly explains network attributes

This Technote—authored by Darshan Patel, Perry Smith and Saania Khanna—has some important information about VIO server configuration.

I’ll start with their conclusions:

  • Largesend and jumbo frame does two different actions. These actions are mutually independent.
  • Largesend decides who does the segmentation
  • Jumbo frame decides the size of the segmented packet that goes on the wire

This is in response to this query: “I am configuring VIO Servers and AIX VIO Clients on Power system. There are many network attributes related to largesend and jumbo frame such as largesend, large_send, mtu_bypass, jumbo_frame, use_jumbo_frame. I am confused about these attributes. I want to understand how largesend and jumbo_frame work and what is the difference between the two.”

The document includes illustrations and outlines various configuration scenarios:

  • In case 1, VIO server has largesend on, jumbo frames off, VIO client has MBP off, and MTU of 1500. In this case, messages are segmented to MTU size 1500 by TCP Kernel stack in AIX.
  • In case 2, VIO server has largesend on, jumbo frames off, VIO client has MBP on, and MTU of 1500. In this case messages are segmented to MTU size 1500 by Real Ethernet Adapter.
  • In case 3, VIO server has largesend on, jumbo frames on, VIO client has MBP off, and MTU of 9000. In this case messages are segmented to MTU size 9000 by TCP Kernel stack in AIX.
  • In case 4, VIO server has largesend on, jumbo frames on, VIO client has MBP on and MTU of 9000. In this case, messages are segmented to MTU size 9000 by Real Ethernet Adapter.
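
Before testing these scenarios, you may want to confirm how a given client is configured. Here’s a minimal sketch, assuming a client interface of en0 (your device names will vary):

# lsattr -El en0 -a mtu_bypass
# chdev -l en0 -a mtu_bypass=on

On the VIO server side, the padmin CLI syntax is slightly different; for example, lsdev -dev ent5 -attr largesend would show the attribute on a shared Ethernet adapter named ent5.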

 This Technote should help you understand the different settings and give you an idea of the results you should expect to see based on how you configure your environment. Be sure to read the whole thing.

Why You Should be Running VIOS 3.x

Edit: Hopefully you have upgraded your VIO servers by now.

Originally posted November 2020 by IBM Systems Magazine

Technical Editor Rob McNelly explains the advantages of upgrading your VIO server.

The PowerVM® Virtual I/O Server (VIOS) provides the capability to virtualize your POWER® servers. It’s the software layer that runs between client VMs and the physical hardware.

Imagine a server running 25 VMs. Prior to the advent of virtualization, these would by necessity be multiple physical servers, each with its own set of network and SAN adapters. Of course, virtualizing physical hardware and sharing that among multiple VMs has obvious benefits. For starters, you eliminate the need for all of those extra boxes while having even greater power and capacity. Beyond that, there’s little need to dedicate a physical adapter to every workload because adapters can be shared most of the time.

VIOS debuted with IBM POWER5 servers running AIX® and Linux® workloads. With the availability of POWER6, IBM i workloads were also supported. Through the years, many administrators have come to rely on VIOS, but not everyone is using the latest versions. VIOS 3.1.0 debuted in November 2018, and the latest update, VIOS 3.1.1, arrived a year ago.

Now Is the Time to Upgrade

However, if you have yet to move to VIOS 3.x, you should do so as soon as possible. With Release 2.2.6 reaching end of life as of October, VIOS 2.x versions are no longer supported without an extended support contract. And continuing to use a release that’s out of support can put your organization at risk. 

Maintaining access to IBM support isn’t the only reason you should be running VIOS 3.x on your servers. It’s important to understand that the latest versions of VIOS are fundamentally different from their 2.x predecessors. One important change is that VIOS 3.x is based on AIX 7.2, whereas VIOS 2.x was based on AIX 6.1. Just as there are advantages to running AIX 7.2 over AIX 6.1, there are advantages to running VIOS 3.x over VIOS 2.x. Most significant is that newer POWER hardware can be better exploited with AIX 7.2—and by extension, the virtualization code that comes with VIOS 3.x. The code base is cleaner, because IBM removed older unused packages.

These changes to the underpinnings of VIOS have necessitated a transformation of the upgrade process. While I wouldn’t say upgrading to VIOS 3.x is more technically difficult than what you’re accustomed to with 2.x, it must be approached carefully. This is something new that requires planning and preparation.

Upgrade Tools 

IBM has developed a viosupgrade tool, and I recommend practicing with it before upgrading your production machines. If you have spare computing capacity, it may make sense to use live partition mobility to evacuate your frames and perform the work on “empty” POWER frames so that running workloads aren’t affected. The upgrade process should be documented, and you should go in with the expectation that physical to virtual mappings, performance settings and more will need to be verified once you’re done.
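
For reference, here’s a minimal sketch of what a local upgrade to an alternate disk might look like, assuming a VIOS 3.1 mksysb image named vios31.mksysb and a spare disk hdisk1 (both names are hypothetical; validate the flags against the current viosupgrade documentation before relying on them):

$ viosupgrade -l -i vios31.mksysb -a hdisk1
$ viosupgrade -l -q

The first command kicks off the upgrade to the alternate disk; the second queries the status of the operation.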

While a complete explanation of the upgrade process is beyond the scope of this article, detailed information is available online (see “Upgrading Resources,” below). 

And fortunately, you’re not on your own. Your business partner or IBM Systems Lab Services can help you scope out different options. In fact, this may be a good catalyst to examine your entire environment. Is VIOS 3.x supported on the hardware you’re running? Is your HMC in need of an update? How about your system firmware, or your AIX versions? If these different components haven’t been maintained, this project can and perhaps should mushroom into something well beyond a VIOS upgrade. You may even consider upgrading your hardware, as it may be easier to configure new hardware and install VIO 3.x from scratch while migrating your AIX workloads and decommissioning older hardware. 

Upgrading Resources

Power Systems Best Practices Doc: The Latest

Edit: I still love this document.

Originally posted July 23, 2019 on AIXchange

The Power Systems Best Practices document was recently updated. This is the presentation that IBMer Fredrik Lundholm maintains. 

In his email notification, Fredrik requested reader feedback:

I would really like some comments and feedback on VIOS rules, and if we can automate a few more parameters.

I changed into a new presentation format, I hope you like it and please forward any feedback as usual.

Dated June 19, this is version 2.1. As I’ve not written about this for a couple of years, here’s a summary of all the changes from the current and previous updates:

  • Changes for 2.1: New layout, POWER9 Enterprise, VIOS and AIX recommendations, VIOS rules, recommended network and SAN adapters, cleaned out obsolete content.
  • Changes for 1.20: Rootvg failure monitoring in PowerHA 7.2, Default Processor mode.
  • Changes for 1.19: 2018 April Update, POWER9 enablement, Spectrum Scale 4.2 certified with Oracle RAC 12c.
  • Changes for 1.18: 2017 September Update, new AIX default multipathing for SVC.
  • Changes for 1.17: 2017 Update, VIOS 2.2.5, poll_uplink clarification (edit).

This disclaimer is from Fredrik’s introduction:

While this presentation lists the expected best practices, all customer engagements are unique. It is acceptable to adapt and make implementation deviations after a mandatory review with the responsible architect (not only engaging the customer) and properly documenting these.

Administrators should appreciate this message. Best practices are of course valuable, but rigidly adhering to standards 100 percent of the time won’t cut it. 

Fredrik does a great service, and, as I’ve noted, he has a following. But if you’re not already familiar with this document, take the time to review it. Odds are you’ll learn something.

The Growing AIXchange Rate

Edit: I did not make it another 12 years.

Originally posted July 16, 2019 on AIXchange

After 12 years I guess it’s inevitable, but this blog is no longer the only AIXchange in town. 

Occasionally I do web searches on “aixchange” to locate my old articles. But over the past year or so, these searches have led me to a number of other things. There was the 2018 conference on AI, a tour agency offering excursions to Europe and India, the Facebook group for residents of Aix-en-Provence, France, and whatever this company is.

I suppose it’s not surprising that the name would pop up elsewhere online. But on a symbolic level at least, I like to think it means that this particular AIXchange, the blog I’ve authored since July 2007, was in fact a good idea. 

I’d already been contributing articles to IBM Systems Magazine for about three years when Evelyn Hoover, the publication’s content director, approached me about doing a weekly blog. While I don’t precisely recall how we settled on the AIXchange name, I found some old emails where this was discussed. Rejected names included AdminExchange, IT Calibrate and AIXexchange. Gee, that last one is kind of a tongue-twister. 

At first I was pretty apprehensive about writing every week. I wondered how I could continually feed the beast, as it were. But I soon realized that coming up with ideas isn’t that hard. Going in depth on IBM announcements or sharing my experiences helping resolve client issues is natural fodder for this blog. I’ll also get personal from time to time. 

Mostly though I just keep note of interesting information, whether it involves AIX, IBM hardware or really, anything with technology. At any given time my inbox will contain several article drafts. Eventually I’ll finish them up and send them to my editor. 

On that note, I should add that this has always been a team effort. I’m grateful that Evelyn thought of me in the first place. And my editor, Neil Tardy, whose feature articles can be found elsewhere on this site, has been with me from week one. After all this time we’ve developed a shorthand that makes the editing process pretty simple. While we have our occasional back and forths, I generally go with his suggestions because each week, he makes me look good. 

One reason I love doing this is it forces me to pay attention to new products, announcements, tips and tricks, etc. This blog allows me to document things for me to rediscover later, and it allows me to connect with many of you as you seek clarification and/or voice your opinions about my writings. 

So thanks to everyone. Here’s to another 12 years—at least.

A TechU Reminder

Edit: Some links no longer work

Originally posted July 9, 2019 on AIXchange

I’ve written about the IBM Systems Technical University (TechU) conferences many times, most recently following the event in Atlanta in May. As most of our audience is in the United States, I’ll note the fall conference is in October in Las Vegas, but TechU events are scheduled worldwide, so keep this on your radar.

One thing I appreciate about TechU is that they get the little things right. For instance, you can download slides from any session once you get home. Of course presenters typically include their contact info on their presentations, and I’ve found most are willing to respond to follow-up queries. 

For an idea of what went on at the Atlanta conference I attended, see this. Or check out this Twitter graph that analyzed tweets with the #IBMTechU hashtag:

The graph represents a network of 493 Twitter users whose recent tweets contained “#IBMTechU”, or who were replied to or mentioned in those tweets, taken from a data set limited to a maximum of 18,000 tweets. The network was obtained from Twitter on Wednesday, 08 May 2019 at 17:12 UTC.

The tweets in the network were tweeted over the 9-day, 1-hour, 44-minute period from Monday, 29 April 2019 at 14:18 UTC to Wednesday, 08 May 2019 at 16:03 UTC.

Additional tweets that were mentioned in this data set were also collected from prior time periods. These tweets may expand the complete time period of the data.

This provides a handy list of people to follow during the conference. I like knowing which sessions are well attended and what conference-goers are interested in, and I join in those conversations. Even if I wasn’t there, I’d still like to know what I’m missing out on. 

Believe me, TechU is worth your time and effort.

Power E980 Offering Makes Capacity Available as Needed

Edit: Still useful to know about

Originally posted June 25, 2019 on AIXchange

Those of you who are running E980 servers should be aware of this announcement:

IBM Power Enterprise Pools 2.0 is a new IBM Power E980 offering designed to deliver enhanced multisystem resource sharing and by-the-minute consumption of on-premises Power E980 compute resources to clients deploying and managing a private cloud infrastructure.

All installed processors and memory on servers in a Power Enterprise Pool (2.0) are activated and made available for immediate use when a pool is started. There is no need to reallocate mobile resources from server to server.

System capacity may be seamlessly made available when it is needed without requiring human awareness or intervention.

Permanent, Capacity Upgrade on Demand processor and memory activations (Base Capacity) and corresponding license entitlements are purchased on each Power E980 system. These Base Processor and Memory Activation resources are then aggregated across a pool.

Unpurchased (inactive upon shipment) processor and memory capacity in the pool is activated when a pool is started and can subsequently be used on a pay-as-you-go basis (Metered Capacity) from Capacity Credits purchased from IBM or an authorized IBM Business Partner.

Processor resource within a pool is tracked by the minute, based on actual consumption by shared processor partitions.

Memory resource within a pool is tracked by the minute, based on the assignment of resources to partitions, not based on operating system usage of the memory.

Metered Capacity consumption on one system may be offset by idle Base Capacity elsewhere in the pool during the same period.

A single Power Enterprise Pool (2.0) may support up to 500 shared processor partitions across up to 16 Power E980 systems within a single enterprise, within a single country.

Each Power Enterprise Pool (2.0) is monitored and managed from a Cloud Management Console in the IBM Cloud.

Capacity Credits may be purchased from IBM, an authorized IBM Business Partner, or online through the IBM Entitled System Support website, where available.

Clients may more easily identify capacity usage and trends across their Power E980 systems in a pool by viewing web-accessible aggregated data without spreadsheets or custom analysis tools.

So rather than manually activate resources and dynamically move LPARs or CPU or memory entitlements around the physical machines in your environment, capacity is simply available. And you use it. 

Depending on which systems and LPARs are busy, you could theoretically establish offsetting usage patterns—e.g., one machine is busy during the day and one handles the off hours. If you exceed your base resources across your whole pool—say, if all systems are busy at once—you’d be billed on a metered basis for the resources you temporarily use. 

Power Enterprise Pools is an interesting solution that’s worth considering for your environment. 

Migrating the Cluster Repository Disk

Edit: Still good stuff.

Originally posted June 18, 2019 on AIXchange

Dino Quintero (@DinoatRedbooks on Twitter) maintains a Redbooks blog on IBM developerWorks. In this post from April, he and Shawn Bodily explain how to migrate the cluster repository disk on PowerHA SystemMirror:

The following procedure is valid for clusters that are PowerHA SystemMirror v7.2.0 and later. Verify cluster level on any node in the cluster by executing the halevel -s command as follows:

TST[root@aixdc79p:/] # halevel -s
7.2.1 SP2

The repository disk is the only disk in the caavg_private volume group and requires special procedures. You do not use LVM on it. It is recommended to run a verification on the cluster prior to replacing a repository disk. If there are any errors, these must be addressed and corrected before replacing the repository disk.

To get started, go through these steps, which are detailed in the post:

clmgr verify cluster
bootinfo -s hdisk#
chdev -l hdisk# -a pv=yes
chdev -l hdisk# -a reserve_policy=no_reserve

Here’s the command to swap the disk:

clmgr modify cluster REPOSITORY=hdisk#

Then verify both disks are repository disks with:

clmgr query repository

Remove the original disk with:

clmgr -f delete repository hdisk#

Then rerun the query command:

clmgr query repository

At this point, verify your new disk is the repository disk. 
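
An easy way to double-check: since the repository disk is the only disk in caavg_private, a quick lspv will show which hdisk currently holds that role:

lspv | grep caavg_private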

Then finally, sync the cluster:

clmgr sync cluster

I wanted to highlight that last part, but there’s a lot more in the actual post. I encourage you to read the whole thing.

Power Overload

Edit: Some links no longer work

Originally posted June 11, 2019 on AIXchange

I attended last month’s IBM Systems Technical University (TechU) conference in Atlanta, and as always, it was an enjoyable and enlightening time. In one of his sessions, Nigel Griffiths had a great slide that challenged attendees’ “street cred”:

Which is right?

Power
Power9
Power 9
Power-9
POWER9
POWER 9
POWER-9

Did you know the answer, or did you need to check the Twitter comments? 

In that same session, Nigel remarked on how the word power has become overloaded. That got me thinking: If I ask if you have enough power, what am I talking about? The POWER processor architecture in general? A POWER-based server? Can you really have enough of either in your environment? 

What about electrical power to your data center? Have you run out of power because you have so much power-hungry gear? What about electrical power to your rack, do you have enough? Is it the right kind of power? 

Of course we have powerful power supplies on our POWER servers. Using the energy management modes, we can adjust the power and performance modes. Maybe you run your Power Systems server in power saver mode. 

You can probably come up with more—and more clever—examples, but Nigel’s point is that, in our world, power has many meanings. I agree—and as someone who gets asked about power all the time, please understand if I request some clarification on occasion. 

I encourage you to check out a TechU event in your area. The fall North American conference is this October in Las Vegas, but events are held worldwide.

DR Solutions and the Need to Keep Pace

Edit: Some links no longer work

Originally posted June 4, 2019 on AIXchange

Chris Gibson recently updated his blog post about using the ghostdev and clouddev flags in the disaster recovery process. 

In his original post, Chris replicated rootvg via his SAN. But since this was written in 2012, an update was needed. Here’s what Chris heard from IBM:

Since one or more of the physical devices will change when booting from an NPIV replicated rootvg, it is recommended to set the ghostdev attribute. The ghostdev attribute will trigger when it detects the AIX image is booting from either a different partition or server. The ghostdev attribute should not trigger during LPM operations (Live Partition Mobility). Once triggered, ghostdev will clear the customized ODM database. This will cause detected devices to be discovered as new devices (with default settings), and avoid the issue with missing/stale device entries in ODM. Since ghostdev does clear the entire customized ODM database, this will require you to import your data (non-rootvg) volume groups again, and perform any (device) attribute customization. To set ghostdev, run “chdev -l sys0 -a ghostdev=1”. Ghostdev must be set before the rootvg is replicated.

This is from his update:

If ghostdev is set to 1 and you attempt to use SRR (or offline LPM), the AIX LPAR will reset the ODM during boot. This is (most likely) not desired behavior. If the ODM is cleared the system will need to be reconfigured so that TCP/IP and LVM are operational again. If you require a “ghostdev like” behavior for your AIX disaster recovery (DR) process, I would recommend you set the sys0 attribute, clouddev, to 1, immediately after you have booted from your replicated rootvg. Rebooting your AIX system with this setting enabled will “Recreate ODM devices on next boot” and allow you to reconfigure your LPAR for DR. Once you’ve booted with clouddev=1 and reconfigured your AIX LPAR at DR, immediately disable clouddev (i.e. set it to 0, the default), so that the ODM is not cleared again on the next system reboot. Some more details on clouddev [follow].
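
Putting that into commands, here’s a minimal sketch of the sequence Chris describes, run on the LPAR after booting from the replicated rootvg at the DR site:

chdev -l sys0 -a clouddev=1
shutdown -Fr

Then, after the reboot, reconfigure TCP/IP, import your volume groups, and immediately set the attribute back:

chdev -l sys0 -a clouddev=0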

Chris concludes with:

If you are looking for a more modern and automated solution for your AIX DR, I would highly recommend you take a look at the IBM VM Recovery Manager for IBM Power Systems. Streamline site switches with a more economical, automated, easier to implement high availability and disaster recovery solution for IBM Power Systems.

Many admins struggle with disaster recovery. Some enterprises roll their own solutions, while others rely on IBM offerings. Frequently admins aren’t up to speed on the latest solution designs and implementation techniques. Quick example: simplified remote start. Are you aware of this capability? Read this and scroll to the bottom of the page for links to related and more detailed videos. 

Disaster recovery solutions are important, but creating and adhering to a DR plan is critical. Regular testing is the only way you’ll know that what you have will work should the need arise.

More PowerAI Resources

Edit: Some links no longer work

Originally posted May 28, 2019 on AIXchange

Following up on last week’s AI-themed post, I encourage you to check the extensive PowerAI documentation available from the IBM Knowledge Center. 

You’ll find instructions on planning, installing frameworks and PowerAI system setup, along with frequently asked questions, a developer portal, and more.  

There are also two new Redbooks that get into more concepts and information around Deep Learning and AI and Big Data.

Here are a couple short excerpts from the deep learning Redbook. First, from page 22 section 2.1: “What is IBM PowerAI?”

IBM PowerAI is a package of software distributions for many of the major deep learning (DL) software frameworks for model training, such as TensorFlow, Caffe, Chainer, Torch, and Theano, and their associated libraries, such as CUDA Deep Neural Network (cuDNN), and nvCaffe. They are extensions that take advantage of accelerators, for example, nvCaffe is NVIDIA extension to Caffe so that it can work on graphical processing units (GPU). As with nvCaffe, IBM has an own extension to Caffe, which is called IBM Caffe. Furthermore, the IBM PowerAI solution is optimized for performance by using the NVLink-based IBM POWER8 server, the IBM Power S822LC for High Performance Computing server, and its successor, the IBM Power System AC922 for High Performance Computing server. The stack also comes with supporting libraries, such as Deep Learning GPU Training System (DIGITS), OpenBLAS, Bazel, and NVIDIA Collective Communications Library (NCCL).

Here’s more from section 2.2:

IBM PowerAI provides the following benefits:

Fast time to deploy a DL environment so that clients can get to work immediately:

  • Simplified installation in usually less than 1 hour
  • Precompiled DL libraries, including all required files

Optimized performance so users can capture value sooner:

  • Built for IBM Power Systems servers with NVLink CPUs and NVIDIA GPUs, delivering performance unattainable elsewhere
  • Distributed DL, taking advantage of parallel processing

Designed for enterprise deployments:

  • Multitenancy supporting multiple users and lines of business (LOBs)
  • Centralized management and monitoring by integrations with other software

IBM service and support for the entire solution, including the open source DL frameworks.

One thing I can tell you from experience: the most recent releases of PowerAI are much easier to install than the earlier versions. And upgrading is simple enough. For instance, I was working on Red Hat with an older PowerAI version, so I followed this information to upgrade it.

Taking those steps, in that order, we were able to start working with the latest version of PowerAI.

Automation and AI at Work

Edit: This is still pretty interesting

Originally posted May 21, 2019 on AIXchange

There’s an old saying about there being no free lunches. (Kids, ask your grandparents.) But in the age of AI, apparently that’s no longer the case. Check out this entertaining story about a savvy techie who used AI and Instagram to automatically post information and receive free meals:

I’m going to explain to you how I’m receiving these free meals from some of the best restaurants in New York City. I’ll admit—it’s rather technical and not everyone can reproduce my methodology. You’ll either need a background in Data Science/Software Development or a lot of free time on your hands. Since I have the prior, I sit back and let my code do the work for me. Oh, and you guessed it, you’ll need to know how to use Instagram as well….

Some of this may seem like common sense, but when you’re automating a system to act like a human, details are important. The process can be broken down into three phases: content sharing, growth hacking, and sales & promotion….

So how did he do it?

… I needed to create an algorithm that can weed out the bad from the good. The first part of my “cleaner” has some hard-coded rules and the second is a machine learning model that refines the content even further.

I played around with a number of classification algorithms such as Support Vector Machines and Random Forests but landed on a basic Logistic Regression. I did this for a few reasons, first being Occam’s Razor—sometimes the simplest answer is the right one. …

I wrote a Python script that randomly grabs one of these pictures and auto-generates a caption after the scraping and cleaning process is completed. Using the Instagram API, I was able to write code that does the actual posting for me. I scheduled a cron job to run around 8:00 AM, 2:00 PM, and 7:30 PM every day.

At this point, I have a complete self-sustaining robotic Instagram. My NYC page, on its own, is finding relevant content, weeding out bad potential posts, generating credits and a caption, and posting throughout the day. In addition, from 7:00 AM to 10:00 PM, it is growing its presence by automatically liking, following, and unfollowing with an intrigued audience which has been further redefined by some data science algorithms.

And in conclusion:

Due to the power of AI, automation, and data science—I am able to sit back and relax while my code does the work for me. It acts as a source of entertainment while at the same time being my salesman.

I hope this helps inspire some creativity when it comes to social media. Anyone can use these methods whether they are technical enough to automate or if they need to do it by hand. Instagram is a powerful tool and can be used for a variety of business benefits.

I’ve skipped most of the details, so by all means, read the whole thing. Crazy as it sounds, it’s a fantastic example of what can be accomplished with machine learning and artificial intelligence.

The lshwres Command and Hardware Discovery

Edit: Some links no longer work

Originally posted May 14, 2019 on AIXchange

Recently, a friend was trying to get the lshwres command to work in his environment. 

I’ve previously written about using the HMC command line to get information from managed machines. It’s a terrific use of the HMC, especially if you’re working with new machines and your OS isn’t loaded yet. Even in established environments, the HMC command line makes everything easier. Why bother with logging into multiple VIO servers or LPARs to get information? 

In my friend’s case, he was running a loop. First he used lssyscfg to get the system names. Then he fed those names into the lshwres command. It was simple enough; he wanted to collect WWNs for his SAN team so they could get them zoned appropriately:

lshwres -r virtualio -m <machine name> --rsubtype fc --level lpar -F lpar_name,wwpns
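
The loop itself looked something like this minimal sketch, run from the HMC command line (lssyscfg supplies the managed system names; I’m assuming the HMC’s restricted shell accepts a simple one-line for loop):

for sys in $(lssyscfg -r sys -F name); do lshwres -r virtualio -m "$sys" --rsubtype fc --level lpar -F lpar_name,wwpns; done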

However, for some reason, only the two new E950 machines were providing the expected output. Meanwhile, nothing was happening with the two new S924 machines. “No results were found” was the only response from them. 

A web search on the lshwres command returned this information from IBM developerWorks:

If the lshwres command displays the message:

No results were found

then hardware discovery has not yet been done. The managed server must be powered off and back on with the hardware discovery option. To power on a managed server with hardware discovery from the HMC command line, use the HMC command chsysstate -m xxx -r sys -o onhwdisc, where xxx is the same as above. Or choose the Hardware Discovery option in the HMC Power On window, assuming the Hardware Discovery option is offered by the HMC GUI.

Please note that when powered on with the hardware discovery option, a server takes longer to get to partition standby mode than without the option. And once in partition standby mode, a temporary LPAR for hardware discovery is created and started, which takes additional time.

This IBM Support Center document has more about hardware discovery. 

Sure enough, once we powered off the machines and restarted the frames with the onhwdisc option, the S924s also gave the expected output.

Setting the Record Straight on Power Systems

Edit: Still good information

Originally posted May 7, 2019 on AIXchange

There’s an IBM-produced blog about Power Systems servers that’s worth bookmarking. This introduction explains it well:

Here are some of the most common myths we hear in the marketplace today:

  • Power has no cloud strategy.
  • Migrating to Power is costly, painful and risky.
  • x/86 is the de-facto industry standard and Power will soon be obsolete.
  • Power solutions are more expensive than x/86 solutions.
  • Linux on Power operates differently and is managed differently than Linux on x/86.
  • x/86 is the best platform to run SAP HANA, Nutanix and open source databases like Mongo DB, MariaDB and EnterpriseDB.
  • Reliability, availability and serviceability (RAS) features are no longer a differentiator because every platform is the same.
  • Oracle software runs better on Exa and/or Sparc systems than it does on Power.
  • Power is a closed, proprietary architecture.
  • The OpenPOWER Foundation is weak and not really important to anyone in the industry.

There are regular updates. In particular, I loved this post from March that addresses the perception that x86 is the industry standard:

To begin breaking down this myth, let’s consider how IBM Power Systems stands apart from x86.

Designed for enterprise workloads. x86 is designed to accommodate multiple markets and design points, from smartphones to laptops, PCs and servers. Power Systems, on the other hand, is designed for high-performance, enterprise workloads like data analytics, artificial intelligence and cloud-native apps and microservices—workloads that are driving innovation and digital transformation in organizations today.

Targeting new market segments. Over the years, x86 vendors shipped a lot of systems into commodity markets, but there have always been market segments it couldn’t get because of the limitations of its general-purpose architecture.

Today, a growing number of market segments where just a few years ago x86 was the only solution available, are facing strong competition from Power Systems. Consider the number of clients who bought x86-based solutions for SAP HANA, Nutanix and open source databases like MongoDB, EDB PostgreSQL and Redis, to name a few. They didn’t buy x86 solutions because they were the best choice; they bought them because they were the only choice. SAP HANA is an excellent example. 2,500-plus clients now run this application on Power Systems instead of x86.

These applications, plus the rising demand for data analytics, HPC infrastructure and cognitive solutions like AI, machine learning and deep learning, may be the most cogent examples of market segments x86 is struggling to keep.

On the forefront of high-performance computing. In addition, two of the world’s most powerful supercomputers are running IBM POWER9: the US Department of Energy’s Summit and Sierra at Oak Ridge National Laboratory in Tennessee and Lawrence Livermore National Laboratory in California.

Growing revenue. Far from being pushed out of the market, IBM Power Systems has enjoyed five consecutive quarters of growth driven by client adoption of the latest generation of Power processors, IBM POWER9.

As I said, there’s much more. I cite this information because, in our world, perception is often reality. As users of IBM solutions, we need to be doing our part to help educate those around us about the real-world value of Power Systems.

Text from AIX

Edit: Do you do anything similar?

Originally posted April 30, 2019 on AIXchange

As I’ve noted previously, I regularly visit the AIX Forum. Generally there’s good discussion, and occasionally an interesting question is raised. For instance, about a month ago a forum member asked about sending texts from AIX. 

The first reply noted that the curl command can be used for this:

curl http://textbelt.com/text -d number=123456789 -d "message=hello from me"

Alternatively, you could email your provider (assuming they have an SMS gateway). For instance, in the U.S., Verizon allows you to email the 10-digit mobile number followed by @vtext.com. Messages are limited to 160 characters, including the subject line and recipient’s email address. To include an attachment, enter the recipient’s 10-digit number followed by @vzwpix.com. 

AT&T offers something similar. Information about other carriers, along with browser add-ins, can be found here. 

So why text through AIX in the first place? Administrators often do this as part of a notification process. If there’s an error in the system, the admin receives an automated text message. Or maybe you want to know when a job completes. 

Maybe you want to set up some reminders in your crontab. Although this example runs on Linux, something similar can be set up on AIX. 
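
For example, here’s a minimal sketch of a crontab entry that sends a Friday afternoon reminder through an email-to-SMS gateway (the phone number is a placeholder, and this assumes your system can already send outbound mail):

0 17 * * 5 echo "Submit your timesheet" | mail -s "reminder" 5551234567@vtext.com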

Of course you must weigh the benefits of getting notified via SMS vs. an email notification, but it’s always nice to have options. 

Updating Old Firmware with the OpenBMC Tool

Edit: Have you run into this?

Originally posted April 23, 2019 on AIXchange

I was recently upgrading the firmware on an AC922 server when I realized that the firmware was old enough that no GUI was available for the task. 

Now, for those of you dealing with more recent releases, firmware can be updated using the OpenBMC GUI, which is explained on page 9 of this PDF. Simply point your web browser at the BMC IP address and you’re set. 

In my case, I needed the OpenBMC tool. Learn the basic commands and functionality here; download the tool here. Page 3 of the aforementioned PDF outlines the procedure for updating your firmware. 

I was running this command (where bmc or pnor is the type of image being flashed to the system):

openbmctool -U <username> -P <password> -H <BMC IP address or BMC host name> firmware flash <bmc or pnor> -f xxx.tar

My problem was it kept failing when copying the .tar file from my machine to the BMC. Fortunately, this alternative method allowed me to update the firmware. I’d scp the files over to the BMC’s /tmp/images directory. The BMC would automatically decompress the files and make them available for use. 
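
That step looked something like this sketch (the image file name and BMC address are placeholders, and I’m assuming the default root user on the BMC):

$ scp obmc-phosphor-image.tar root@192.168.1.50:/tmp/images/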

From there I was able to use the curl commands referenced in the GitHub link above and consult the REST-cheat sheet. 

One tricky issue I ran into with curl is that it stores cookies on your machine. After running the commands a few times, they stopped working due to stale cookies. So I had to delete the cjar file in my directory and log back in to get the updates to work. 

Once I got the hang of the scp/curl method and my system was updated, the latest version of firmware got my GUI working. So it’s possible I won’t need to do these updates manually going forward. Nonetheless, I wanted to share this information so you’d at least have a starting point should you run into these issues when updating the firmware on your machines.

Machine Learning on AIX

Edit: Have you had a chance to try this?

Originally posted April 16, 2019 on AIXchange

If you believe that machine learning is strictly for Linux, check out this IBM tutorial on installing and configuring Python machine learning packages on AIX:

Machine learning is a branch of artificial intelligence that helps enterprises to discover hidden insights from large amounts of data and run predictions. Machine learning algorithms are written by data scientists to understand data trends and provide predictions beyond simple analysis. Python is a popular programming language that is used extensively to write machine learning algorithms due to its simplicity and applicability. Many packages are written in Python that can help data scientists to perform data analysis, data visualization, data preprocessing, feature extraction, model building, training, evaluation, and model deployment of machine learning algorithms.

This tutorial describes the installation and configuration of Python-based ecosystem of machine learning packages on IBM AIX. AIX users can use these packages to efficiently perform data mining, data analysis, scientific computing, data plotting, and other machine learning tasks. Some of these Python machine learning packages are NumPy, Pandas, Scikit-learn, SciPy, and Matplotlib.

Because all these packages are Python based, the latest version of Python needs to be installed on the AIX system. YUM can be used to install Python on AIX or it can be directly installed from AIX toolbox. This tutorial talks about Python3 but same should work for Python2 as well. You need to have python3-3.7.1.-1 or later version of Python from AIX toolbox to run these machine learning packages.

In this tutorial, we use a Python package management tool called pip to install these machine learning packages on AIX. These packages are compiled as part of pip installation because binary versions of these packages for AIX are not available on the Python Package Index (PyPI) repository.
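
To give you a taste of what the tutorial walks through, here’s a minimal sketch of the installation, assuming YUM is already set up and pip ships with the Toolbox Python (expect the pip step to take a while, since the packages are compiled from source on AIX):

# yum install python3
# python3 -m pip install numpy pandas scipy scikit-learn matplotlib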

You’ll also find detailed instructions for installing on your system. In addition, there are several related tutorials covering topics like the Scientific Computing Tools for Python, NumPy, Scikit Learn, Project Jupyter, and YUM on AIX.

An Introduction to Problem Analysis

Edit: More good information when you do not know where to start.

Originally posted April 9, 2019 on AIXchange

The IBM Knowledge Center has a number of documents on problem analysis for AIX, Linux and more. While this information may seem basic to anyone who’s spent years dealing with these types of issues, junior admins should spend some time with beginning problem analysis, and all of the other info linked in this document. 

The doc on AIX and Linux Problem Analysis starts with these tips:

Remember the following points while troubleshooting problems:

  • Has an external power outage or momentary power loss occurred?
  • Has the hardware configuration changed?
  • Has system software been added?
  • Have any new programs or program updates (including PTFs) been installed recently?

There are also tutorials on IBM i problem analysis and light path diagnostics on Power Systems. 

Finally, there’s a problem reporting form.  

Credit to Kiran Tripathi on Twitter, who pointed me toward these docs.

Choosing the Proper Level for Managing VIOS with NIM

Edit: These pages are still worth bookmarking.

Originally posted April 2, 2019 on AIXchange

Here’s an IBM document on VIO server to NIM mapping (courtesy of Chris Gibson on Twitter). 

The chart shows you which levels are needed for your NIM master to manage your VIO servers. Particularly since the update to VIOS 3.1, it’s critical that your NIM master is at the correct level. 
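
Before kicking off any operations, it’s worth confirming the levels on both sides. A quick check might look like this:

# oslevel -s       (on the NIM master)
$ ioslevel         (on the VIO server, as padmin)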

While I’m talking about IBM documents, here are some other pages worth bookmarking: 

 Just poking around on IBM support websites can be productive. You never know what information you might uncover that will help you somewhere down the line.

Comments on the Changing UNIX Landscape

Edit: It is still a significant slice of the POWER business.

Originally posted March 26, 2019 on AIXchange

I was quoted in this recent NetworkWorld article on the slow decline of UNIX. 

You’ll have to register to read the whole thing, but I want to hit some highlights:

Most of what remains on Unix today are customized, mission-critical workloads in fields such as financial services and healthcare. Because those apps are expensive and risky to migrate or rewrite, Bowers expects a long-tail decline in Unix that might last 20 years. “As a viable operating system, it’s got at least 10 years because there’s this long tail. Even 20 years from now, people will still want to run it,” he says.

The gist of the article is that IBM is the last company standing in the UNIX space. That sounds pretty dire, but the tone changes when IBM executive Steve Sibley is quoted. He notes that 10 years from now the company will continue to have a substantial number of AIX clients, the majority of which will be Fortune 500 clients. Here’s the part where I come in:

“No one buys a platform for the platform,” McNelly says. “They buy an application. As long as application support remains for some key platforms, it’s hard to beat the value of AIX on [IBM Power Systems]. Many times after companies do some analysis, [and consider] the current stability and the migration effort, [it] makes no sense to move out of something that’s perfectly functional and supported and has a strong roadmap into the future.”

To elaborate, the beauty of IBM Power Systems hardware is that it’s positioned to run whatever application and operating system you want to run: AIX, IBM i or Linux. As stated in the article, these large, powerful systems are designed for uptime and resiliency, but this focus does not come at the expense of enabling smaller nodes intended to run Nutanix or smaller IBM/OpenPOWER servers. IBM runs the world’s fastest supercomputers while still providing large enterprise systems with capabilities like capacity on demand and virtualization that competitors cannot.

A Look at AIX and Cloud

Edit: Some links no longer work.

Originally posted March 19, 2019 on AIXchange

I’m quite late to this, but if you haven’t caught Petra Bührer’s Power Systems Virtual User Group presentation, “Enterprise Cloud Bundle and AIX Enterprise Edition,” check it out. (Download the slides and watch the video.) A couple highlights from the Jan. 31 broadcast: 

  • In slide 26, Petra goes over the AIX roadmap and explains why IBM is sticking with 7.2 as the current AIX version. (Spoiler: Now that new functionality is made available through technology levels and service packs, there’s no technologically driven need to update the AIX version number at this time.) She also notes upcoming AIX end of service/end of life dates: April 2019 for AIX 5.3 and April 2020 for AIX 6.1. In addition, POWER5, POWER6 and POWER7 hardware end of service arrives in 2019.
  • In slide 27, Petra discusses the roles of AIX and Red Hat going forward.

Petra also has her own perspective on AIX’s long-term viability (a favorite discussion topic for yours truly: here, here and here). Her July 2018 IBM Systems Magazine article is also worth your time. As I’ve often said, the Power Systems VUG does a great job of providing detailed information on an array of topics. If you can’t catch these presentations live, you can always go back and dig into the replays.

Logging in NPIV from the HMC

Edit: More than one option is always useful.

Originally posted March 12, 2019 on AIXchange

Back in 2013 I wrote about using the chnportlogin and lsnportlogin commands to display and change N_Port ID virtualization (NPIV) mappings. The same operation can be accomplished using the HMC, which came in handy when I was recently asked how to get secondary worldwide port names (WWPNs) logged in for use with live partition mobility:

A login operation may need to be initiated to facilitate SAN administrators’ zoning of new virtual WWPNs (vWWPN), including all inactive WWPNs (the second WWPN in the pair), which are used in Partition Mobility environments.

When performing a login operation, all inactive WWPNs will be activated, including the second WWPN in the pair assigned to each virtual Fibre Channel client adapter. When performing a logout operation, all WWPNs not in use will be deactivated.

To successfully log in a virtual Fibre Channel client adapter, the corresponding virtual Fibre Channel server adapter must exist and it must be mapped.

The primary intent of the login operation is to allow the system administrator to allocate, log in and zone WWPNs before the client partition is activated. With best practices, the WWPNs should be logged out after they are zoned on the Storage Area Network (SAN) and before the partition is activated. If a partition is activated with WWPNs still logged in, the WWPNs used for client access are automatically logged out so they can be logged in by the client.

The login operation can also be used to zone the inactive WWPNs in preparation for a partition mobility operation. If the login operation is performed when a partition is already active, only the inactive WWPNs are activated to the “constant login” state similar to physical Fibre Channel adapters. The WWPNs that are already in use by the virtual Fibre Channel client adapters remain in control of the virtual Fibre Channel clients and are not under the control of this command. This means that active client virtual Fibre Channel WWPNs do not achieve a “constant login” state similar to physical Fibre Channel adapters.

The login operation can interfere with partition mobility operations. Best practice is to perform a logout operation for a partition before attempting to migrate the partition to another server. If a mobility operation is attempted with WWPNs still logged in, the firmware will attempt to automatically log out the WWPNs. However, in some cases, the logouts may not complete in time and may therefore cause the mobility operation to fail.

This IBM Support doc was last modified in December 2016, so the screen shots depict the classic HMC interface. Nonetheless, it’s a good starting point should you need to manually log in ports.
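
If you’d rather script this than click through the HMC interface, the chnportlogin and lsnportlogin commands I mentioned above can be run from the HMC command line. Here’s a minimal sketch, assuming a managed system named Server-9117-MMD-SN123456 and a client LPAR named lpar1 (both hypothetical; check the man pages on your HMC for the exact syntax at your level).

List the NPIV login state of a partition’s virtual Fibre Channel WWPNs:
# lsnportlogin -m Server-9117-MMD-SN123456 --filter "lpar_names=lpar1"

Log in all inactive WWPNs so the SAN team can zone them:
# chnportlogin -o login -m Server-9117-MMD-SN123456 -p lpar1

Log the WWPNs back out before activating or migrating the partition:
# chnportlogin -o logout -m Server-9117-MMD-SN123456 -p lpar1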

Asking the Right Questions

Edit: It always makes sense to think before you speak.

Originally posted March 5, 2019 on AIXchange

I love the approach expressed in this tweet:

Brad Geesaman @bradgeesaman

Instead of asking “Why didn’t you just use X?” Ask: “Was solution X considered?” You’ll 9/10 times get a really good reason and 10/10 times not make yourself sound arrogant and accusatory.

Have you ever wondered how a particular implementation got approved? Why was this choice made instead of something simpler or easier?

Short answer: Often it isn’t that simple. It’s important to understand that, even when the technical solution seems obvious to you, there may be political or other considerations in play behind the scenes that you know nothing about. 

It may seem simple to you: What do you mean you didn’t mirror that logical volume to begin with? What do you mean you never tested your backups before today? What do you mean you only gave 0.1 CPU to that VIO server? Why, as stated in the tweet, didn’t you just do X?

It’s important though to be open to other possibilities. Some answers may surprise you.  

Sometimes systems that are set up as test machines morph into production machines, and decisions that were perfectly fine for testing weren’t revisited. Obviously there could be skills gaps; those involved did the best they could with the information that they had at the time. Beyond that, requirements change; what once worked great will no longer cut it. Maybe sufficient resources are lacking, either in hardware or personnel, to implement requests. I’ve seen situations where technical employees are overruled and a non-IT decision maker dictates system configuration. 

There could be a hundred reasons why your “no-brainer” solution to this obvious problem wasn’t used. Part of our job is to understand and deal with the constraints that are in place. It’s not our place to simply chime in with some quick fix. Especially when you’re being brought into a new situation, make sure you take the time to really listen before making suggestions, and make sure your questions are the right ones. 

Remember, things change. A few years from now, someone may walk in and wonder about the solution you implemented: “Well, why didn’t he (you) think of this?”

Getting Started with AIX System Files

Edit: Hopefully this is just a review.

Originally posted February 26, 2019 on AIXchange

A while back, Shivaprasad Nayak tweeted about AIX system files.

Here’s a glimpse from the IBM Knowledge Center:

The files in this section are system files. These files are created and maintained by the operating system and are necessary for the system to perform its many functions. System files are used by many commands and subroutines to perform operations. These files can only be changed by a user with root authority.

There are three basic types of files:

  • Regular files, which store data (text, binary or executable)
  • Directories, which contain the information used to access other files
  • Special files, which define a FIFO (pipe) or a physical device

All file types recognized by the system fall into one of these categories. However, the operating system uses many variations of these basic types.

Regular files are the most common. When a word processing program is used to create a document, both the program and the document are contained in regular files.

Regular files contain either text or binary information. Text files are readable by the user. Binary files are readable by the computer. Binary files can be executable files that instruct the system to accomplish a job. Commands, shell scripts, and other programs are stored in executable files.

Directories contain information the system needs to access all types of files, but they do not contain the actual file data. As a result, directories occupy less space than a regular file and give the file-system structure flexibility and depth. Each directory entry represents either a file or subdirectory and contains the name of a file and the file’s i-node (index node reference) number. The i-node number represents the unique i-node that describes the location of the data associated with the file. Directories are created and controlled by a separate set of commands.

Special files define devices for the system or temporary files created by processes. There are three basic types of special files: FIFO (first-in, first-out), block, and character. FIFO files are also called pipes. Pipes are created by one process to temporarily allow communication with another process. These files cease to exist when the first process finishes. Block and character files define devices.
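
You can see these types for yourself: the first character of ls -l output identifies each one (a dash for regular files, d for directories, b and c for block and character special files, p for FIFOs). Here’s a quick sketch using standard AIX commands; the paths are just examples, so substitute files that exist on your system:

# ls -ld /etc/hosts /etc /dev/hdisk0 /dev/rhdisk0
# mkfifo /tmp/mypipe
# ls -l /tmp/mypipe
# istat /etc/hosts

The istat command in the last line displays a file’s i-node information, including the i-node number referenced by its directory entry.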

Scroll down and you’ll see a list of many files you should be familiar with.

Also check out the parent web page, “Files Reference”:

This topic collection contains sections on the system files, special files, header files, and directories that are provided with the operating system and optional program products. File formats required for certain files that are generated by the system or by an optional program are also presented in this topic collection.

This is all good information, so I wanted to pass it along.

A Lifetime Champion

Edit: I am still happy to be part of the community of Champions.

Originally posted February 19, 2019 on AIXchange

Last month, the IBM Champions program announced its honorees for 2019:

The IBM Champions program recognizes and rewards external experts and thought leaders for their work with IBM products and communities. The program supports advocates and experts across IBM in areas that include Blockchain, Cloud, Collaboration, Data & Analytics, Security, Storage, Power, Watson IoT, and IBM Z.

An IBM Champion is an IT professional, business leader, developer, or educator who influences and mentors others to help them innovate and transform digitally with IBM software, solutions, and services.

From the nominations, 635 IBM Champions were selected…. Among those are:

  • 65% renewing; 35% new Champions
  • 39 countries represented
  • 9 business areas, including Data & Analytics (31%), Cloud (22%), Collaboration Solutions (15%), Power Systems (9%), Storage (7%), IBM Z (6%), Watson IoT (4%), Blockchain (2%), and Security (3%)

As always, I’m happy to be a part of the IBM Champions community. It turns out though that there’s a bit more to the story. At last week’s IBM Think conference, I was one of eight new recipients of the IBM Champion Lifetime Achievement award (video here). 

It’s an incredible honor, and I only wish I could have been there in person. The IBM Champion Lifetime designation, “recognizes IBM Champions who stand above their peers for service to the community. Over multiple years, these IBM Champions consistently excel and positively impact the community. They lead by example, are passionate about sharing knowledge, and provide constructive feedback to IBM. The Lifetime Achievement award provides automatic re-nomination into the IBM Champion program for the duration of the program, plus other benefits.” 

Please allow me to reiterate a couple of familiar points: 1) it means a great deal to be recognized for my contributions and 2) without this blog and those of you who read it, I’m not sure this achievement would be possible. 

Yes, I also use Twitter (@robmcnelly) to help inform and educate AIX/IBM Power Systems users, but most of my time and energy is spent posting to this blog. I’m especially grateful to all who frequent AIXchange and share their insights. Thank you, again, for taking time out of your busy days to read what I write.