How to monitor client LPAR vSCSI disks in VIOS



Many customers use vSCSI disk-mapping in VIOS to present external storage volumes to their client LPARs. There are proponents of vSCSI, NPIV and Shared Storage Pools. These all work very well.

I find that AIX-minded technicians will lean towards NPIV because that’s what they have used traditionally. IBM i technicians will tend to use vSCSI as, until recently, NPIV was supported on a limited number of storage platforms with IBM i.

Shared Storage Pools (SSPs) have been around for about four years now. They can be a more attractive option than vSCSI or NPIV, with advantages such as thin provisioning and simplified volume management. SSPs also resemble the way VMware uses external storage, so storage administrators will be comfortable with the way they present volumes to hosts.

Customers that have implemented VIOS for the first time are often suspicious that it may be a bottleneck for their LPAR storage operations. In my experience, it never has been. So how do I show a customer the disk-utilisation statistics for their virtual environment and help to put their worried minds at ease?

I regularly use the nmon tool which comes as standard with VIOS and AIX. For IBM i users this is like WRKSYSSTS, WRKACTJOB, WRKSYSACT and Performance Tools all rolled into one nice little package. Here’s an example of what is displayed when you run nmon without any parameters:

[Screenshot: nmon summary screen]

As you can see, you get a great overview of the system. If you press the “?” key you will get a help display.

[Screenshot: nmon help screen]

Hitting “c” will display the CPU utilisation of the VIOS LPAR. Hitting “c” a second time will switch the CPU utilisation display off; most of the keys work as on/off toggles.

You can select multiple options: for example, press “c” followed by “m” to display CPU and memory information together. Press “q” to quit, and that completes the nmon lesson.

Disk groups

An interesting option I investigated further a few months ago is “g = User-Defined-Disk-Groups”. When you press “g”, you get a message saying “No disk groups found”. On further investigation, I found that this option requires a file name to be passed to nmon when we run the command.

The file we provide to nmon should contain one disk group per line, in the following format:

GroupName hdisk1,hdisk2,hdisk3
This got me thinking about how I could use this capability for my client LPARs. I figured I could use a file layout such as:

LPAR1 hdisk10,hdisk11,hdisk12,hdisk13,hdisk14,hdisk15
LPAR2 hdisk16,hdisk17,hdisk18,hdisk19,hdisk20
LPAR3 hdisk21,hdisk22,hdisk23,hdisk24,hdisk25,hdisk26,hdisk27,hdisk28
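If you prefer to script the setup, the same three-line file can be written with a simple here-document (the path and contents match the example above):

```shell
# Create the nmon disk-group file: one line per client LPAR, with the
# group name followed by a comma-separated list of its hdisks.
cat > /tmp/lpar.hdisks <<'EOF'
LPAR1 hdisk10,hdisk11,hdisk12,hdisk13,hdisk14,hdisk15
LPAR2 hdisk16,hdisk17,hdisk18,hdisk19,hdisk20
LPAR3 hdisk21,hdisk22,hdisk23,hdisk24,hdisk25,hdisk26,hdisk27,hdisk28
EOF
```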

I created the above file as /tmp/lpar.hdisks. The hdisk names are taken from the vhost adapters for each LPAR. So if I run:

lsmap -vadapter vhost0

I will get a list of the virtual target devices mapped to LPAR1 and the hdisks backing those mappings. I used this information to populate my /tmp/lpar.hdisks file.
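That extraction step can be sketched in shell. Here `sample` is a stand-in for real `lsmap -vadapter vhost0` output (the exact field layout is an assumption about your VIOS level); on a live VIOS you would pipe the lsmap output straight into the awk instead:

```shell
# Pull the backing hdisks out of lsmap-style output and join them
# into one nmon group line. 'sample' mimics two mappings on vhost0.
sample='VTD                   vtscsi0
Backing device        hdisk10
VTD                   vtscsi1
Backing device        hdisk11'

# Collect every "Backing device" field and join with commas.
disks=$(printf '%s\n' "$sample" |
    awk '/^Backing device/ { d = (d == "" ? $3 : d "," $3) } END { print d }')
printf 'LPAR1 %s\n' "$disks"
```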

Now if I run:

nmon -g /tmp/lpar.hdisks

I will get the nmon summary display. But when I press “g”, I get:

[Screenshot: nmon disk-group display, one group per LPAR]

As you can see, nmon has picked up the group definitions from /tmp/lpar.hdisks and displays the performance data for each LPAR’s group of hdisks.

I wanted to dig a little deeper, so I created a file called /tmp/1lpar.hdisks and added the following to it:

LPAR3_L01 hdisk21
LPAR3_L02 hdisk22
LPAR3_L03 hdisk23
LPAR3_L04 hdisk24
LPAR3_L05 hdisk25
LPAR3_L06 hdisk26
LPAR3_L07 hdisk27
LPAR3_L08 hdisk28

Column 1 is the name of the disk mapping using my standard naming convention and column 2 is the hdisk associated with the mapping.
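A file like this need not be typed by hand. A minimal sketch, assuming the LPAR3_Lnn convention above and a plain list of hdisk names as input:

```shell
# Number each hdisk with the LPAR3_Lnn convention, one group per
# physical disk, so nmon displays every disk individually.
printf '%s\n' hdisk21 hdisk22 hdisk23 hdisk24 |
    awk '{ printf "LPAR3_L%02d %s\n", NR, $1 }' > /tmp/1lpar.hdisks
```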

Then I ran:

nmon -g /tmp/1lpar.hdisks

and hit the “g” key at the nmon summary screen, which gave me:

[Screenshot: nmon disk-group display, one group per hdisk]

Great. All eight disks for this LPAR are displayed individually.

If you have dual, redundant VIOS implemented (if not, why not?), you’ll need this file on both VIOS LPARs, and you’ll need to run the nmon command on both VIOS LPARs too.

Next steps

You can see from above that I have manually created the files used by nmon -g. In most environments this is both impractical and error-prone.

I set about finding ways to generate the files using a script. This presented some challenges, as the naming convention used when creating the mapped disks affects both how the data is displayed in nmon and how the file can be generated.
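As a taste of the approach, here is a minimal sketch. The `to_groups` helper is hypothetical, and the `lsmap -all -field svsa backing -fmt :` invocation shown in the comment is an assumption about your VIOS level; the last line feeds it canned sample data instead:

```shell
# to_groups: convert lines of "vhost:backing1:backing2:..." into
# nmon group lines of "vhost backing1,backing2,...".
to_groups() {
    while IFS=: read -r vhost backing; do
        # the remaining colon-separated backing devices become a comma list
        printf '%s %s\n' "$vhost" "$(printf '%s' "$backing" | tr ':' ',')"
    done
}

# On a real VIOS, something like (flags are an assumption):
#   lsmap -all -field svsa backing -fmt : | to_groups > /tmp/lpar.hdisks
printf 'vhost0:hdisk10:hdisk11\nvhost1:hdisk16:hdisk17\n' | to_groups
```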

In my next two articles, I will discuss and provide the scripts I created to work with a specific naming convention and for any VIOS environment.


Further reading

IBM developerWorks wiki: nmon for AIX Performance Monitoring
IBM Redbooks: IBM PowerVM Virtualization Managing and Monitoring
Video by IBM’s Nigel Griffiths on YouTube: Shared Storage Pool 4 (SSP4) Concepts
