When you have a one-off command to run on AIX, you just type it in. If you have to do it hundreds of times, you are better off if you script it. But what if the scripting still isn’t fast enough?
I recently had to remove hundreds of virtual SCSI (vSCSI) devices from a Virtual I/O Server (VIOS). Logged in to the VIOS as the user “padmin”, I could remove each virtual target device with a pretty simple command:
rmvdev -vtd vtscsi1
Then I had to remove the underlying backing device:
rmdev -dev hdisk1
So far, so good. The only problem was that this was a very busy VIOS with many hundreds of vSCSI devices.
Unfortunately, the VIOS seemed to be under-resourced, so each rmdev command took a few seconds. Making changes to the VIOS resources to reduce bottlenecks was well outside my domain of influence, as this job was in a very large organisation. Sometimes you can’t weed the entire paddock; you just have to work on the patch you’re given.
Clearly, the rmdev task I had to do was a bottleneck. A few seconds multiplied by hundreds of LUNs meant a lot of waiting around. To add to that, there was a second VIOS for redundancy, and the vSCSI device removal had to be repeated there.
I had a few options. One was to write a script and leave it running. Unfortunately, this was in a very large enterprise environment, and we needed confirmation from the storage team and the users once the rmdev commands had completed successfully. Did I mention that this all had to be done after-hours? So I was highly motivated to find a faster way.
Option two was to run the rmdev commands in parallel. Perhaps I’m too old-school here, but I don’t like the idea of running two or more rmdev commands simultaneously on the same operating system. I had some vague idea that the Object Data Manager (ODM) – the database of system and device information – wouldn’t react kindly to multiple rmdev commands knocking on its door simultaneously.
Which got me thinking.
I noticed that when the rmdev command was being run, a process kept appearing in the process list called “savebase”. According to the command documentation, savebase “saves information about base-customised devices in the Device Configuration database onto the boot device.”
So every time we ran rmdev, savebase was, by default, updating the boot device. And that took time. A few seconds. But when it’s hundreds of commands, those few seconds are multiplied hundreds of times.
How about disabling the savebase command and just running it at the end of the rmdev commands?
As it turned out, this was a quick win. Disabling the savebase step was easy. You just needed to set an environment variable:
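The original snippet hasn’t survived here, and the supported variable name depends on your AIX/VIOS level, so the name below is only a placeholder. Check IBM’s savebase documentation before relying on it:

```shell
# HYPOTHETICAL variable name -- the real, supported name is in IBM's
# savebase documentation for your AIX/VIOS level. Setting it tells the
# device configuration commands to skip the automatic savebase call.
export NOSAVEBASE=true
```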
Then you could run the series of hundreds of rmdev commands, and run savebase just once at the end.
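As a rough sketch, the batch might look like this. The device numbering is illustrative only; on a real VIOS you would pair each vtscsi device with its actual backing hdisk:

```shell
# Sketch only: assumes vtscsi1..vtscsi3 are backed by hdisk1..hdisk3,
# which is an illustrative pairing, not a rule.
for i in 1 2 3; do
    rmvdev -vtd vtscsi$i    # remove the virtual target device
    rmdev -dev hdisk$i      # remove the underlying backing device
done
```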
Then I ran savebase in verbose mode to watch the boot device being updated. Whoops! The savebase failed because of my ENV variable, which was still set. Easily fixed: I unset the variable.
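A sketch of the fix, again using a placeholder for the variable name (substitute whatever your AIX level documents for disabling savebase):

```shell
unset NOSAVEBASE    # HYPOTHETICAL name: the variable that was suppressing savebase
savebase -v         # -v gives verbose output while the boot device is updated
```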
Then, once again, I ran savebase, and this time it completed cleanly.
The results were remarkable. The rmdev commands whizzed through (next time I should compare the timings) and we were all able to finish a lot earlier.
I don’t see any reason why this method can’t be used for other commands such as chdev to change device attributes, or mkvdev to create virtual target devices. However, you would need to check with IBM to ensure the procedure is supported in your environment.
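For example, the same pattern could wrap a batch of chdev changes. The variable name is the same hypothetical placeholder as before, and the device names and attribute here are purely illustrative:

```shell
# HYPOTHETICAL variable name; hdisk names and the reserve_policy
# attribute are illustrative examples only.
export NOSAVEBASE=true

for disk in hdisk10 hdisk11 hdisk12; do
    chdev -dev $disk -attr reserve_policy=no_reserve
done

unset NOSAVEBASE
savebase -v    # commit all the ODM changes to the boot device in one go
```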
In the end, running savebase just once saved me a few hours of waiting around for a script to finish. That meant the storage teams and end users could also clock off much earlier, all thanks to a single ENV variable.
IBM Knowledge Centre: savebase command
IBM Knowledge Centre: Other common run-time configuration commands
Anthony English has worked on IBM Power Systems and their predecessors since the first commercial release of AIX. He is a well-recognised author in the field of IT, and has some unique perspectives on technical and business-related topics. Anthony is based in Sydney, Australia with his wife and seven young children.