Continuing my recent system performance theme, the next IBM i 7.2 function I’d like to discuss is called Batch Model. Perhaps not the most descriptive of names but, mercifully, it is short. And, despite its name, it does in fact analyse all types of workload, not just batch.
When I first heard about it, I feared that IBM would (as it all too often does) give it a hideous name, something like System Director Workload Estimation Performance Modelling Navigator for IBM i. Is it just me, or does anyone else out there believe that Big Blue’s nomenclature department gets paid by the word?
The idea behind the Batch Model tool is simple. You allow it to look at the performance data collected from your system and it creates a model of your existing hardware and workload. It then allows you to make a number of “what if?” changes to that model and shows you how much better (or worse) your system would be.
Let’s say you were considering a disk upgrade on your system. You could tell the Batch Model tool to factor the extra disks into your current workload and it would quickly show exactly what difference that disk upgrade would make.
The following screenshot shows how you could change a Batch Model to reflect the proposed disk upgrade. You will see the simple “before” and “after” description of disk arrays. Here, we had 19 disks before and 32 after.
If we now look at what it actually tells you, this is where Batch Model really scores highly. It does this using easy-to-understand graphs that show “before” and “after” details of your workload. Or, if you are a little more hardcore, you can export tables of data detailing the differences and go off and analyse them yourself.
Note: you can dig in about as deep as you want but, be warned, if you go too deep it can make your brain itch, as some of that data really is designed for the uber-geek. My experience to date suggests that the average user could get 90% of everything they wanted to know from the first views the model offers.
Let’s start with the “before” statistics that IBM calls Measured Resource. This shows how your system is working right now. In here, we can see a summary of a typical day. We have two lines on the graph. The green shows how busy the disks are, the purple how busy the processor is.
In this example, we can deduce that the upgrade would halve the disk utilisation at peak times and drop it to about a third of the original in more normal periods. This is a great indicator for system performance as a whole but, as I mentioned, if you want to dive deeper you can do that too.
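As a rough sanity check on numbers like these, you can approximate how utilisation scales with the number of disk arms, assuming I/O is spread evenly across the array. This is only a back-of-envelope sketch with illustrative figures; the Batch Model itself works from far richer measured data:

```python
# Back-of-envelope estimate of disk utilisation after adding arms.
# Assumes I/O is spread evenly across all arms and the workload is
# unchanged - the Batch Model uses real performance data instead.

def scaled_utilisation(current_util_pct, current_arms, new_arms):
    """Estimate utilisation (%) after changing the number of disk arms."""
    return current_util_pct * current_arms / new_arms

# Illustrative figures only: a peak of 90% busy on the original
# 19 arms, re-estimated for the 32-arm configuration in the example.
peak_after = scaled_utilisation(90.0, 19, 32)
print(f"Estimated peak utilisation after upgrade: {peak_after:.1f}%")
```

Even this crude arithmetic lands in the same ballpark as the model's "roughly half" at peak, which is reassuring, but the model also accounts for queuing effects that simple scaling ignores.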
Next, let’s drill down further and look at the effect this upgrade would have on some individual jobs at a busy time of the day. Here is a sample from one of the peaks you see above at around 1.30pm. Just like before, we get to see how the jobs are running on the system as it stands. This is our “before” or Measured Workload Timeline Overview.
You’ll notice that you don’t just get a list of jobs and times. It actually breaks down the total execution time into sub-components: queuing for CPU, execution on CPU, queuing for disk, service time on the disk. It even knows about other waits like database and journaling. This model is not just a guess, it really is based on hard data gathered from your system.
Then we get to see the same jobs after the upgrade as the Modeled Workload Timeline Overview so we can see what difference the upgrade would actually make to each job.
In this case, we see the amount of red on the left is much smaller. This shows the jobs finishing faster because they are spending less time queuing for disk I/O.
What difference will that upgrade make?
When talking to clients about their options, the most common question I get asked has to be: “What difference will that upgrade make?”. And rightly so. These hardware upgrades are certainly not free. With this tool you can quickly see what difference a specific hardware change would make to your own unique workload.
You can change the Batch Model to show the difference the following changes would make:
• adding another processor
• adding more HDD disk arms
• adding SSDs
• upgrading to a different model or generation of server
• increasing your current workload by a given percentage
• moving a workload from one timeslot to another.
This means that, unlike previous IBM tools (like Workload Estimator), you don’t just feed in summary details based on your best guess and in return get a one-size-fits-all result along with an IBM disclaimer telling you the figures supplied are not worth the paper they are printed on.
Instead you get something detailed, flexible and based on your specific system’s performance data. I can’t yet say hand-on-heart that I know this tool to be 100% accurate but, having looked at the output, it is entirely in line with what I would expect. Add to that the fact that IBM does not make you accept a disclaimer before using it, and it seems IBM must feel very confident in it too.
You base your Batch Models on specific timeframes of performance data. Typically, you would select a particular day as a starting point. I would suggest that you create a couple of models – one representing a typical day and one that represents your busiest – then you can see what difference an upgrade would make to both.
This is just a recommendation. If you don’t have much time, or need to share your findings with someone with a short attention span, just model your busiest time.
Batch Model truly allows you to take back control of planning your upgrades. You no longer have to trust that your friendly business partner actually knows what (if any) upgrade you genuinely need.
You can also use it to work out how much more you can add to your existing server without crippling it. For example, if your boss turns round and says: “I want to add another 50 users to the system, will it handle it?”, with the Batch Model tool you can check yourself and give him/her the hard facts to back it up.
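Before you even open the tool, you can frame that sort of question with a quick first-pass calculation: scale current utilisation by the expected growth and compare it against a headroom threshold. All of the figures below are hypothetical, and it assumes the new users generate the same load per head as the existing ones; the Batch Model answers this properly from real data:

```python
# Crude first-pass check for "can we add N more users?".
# Assumes new users generate the same load per head as existing
# ones - the Batch Model does this properly against measured data.

def projected_cpu(current_cpu_pct, current_users, extra_users):
    """Scale CPU utilisation linearly with the user count."""
    growth = (current_users + extra_users) / current_users
    return current_cpu_pct * growth

# Hypothetical figures: 200 users driving 60% CPU, adding 50 more.
projected = projected_cpu(60.0, 200, 50)
print(f"Projected peak CPU: {projected:.1f}%")
print("Within headroom" if projected < 80.0 else "Needs an upgrade first")
```

If the crude estimate is anywhere near your comfort threshold, that is exactly when the model's per-job breakdown earns its keep.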
Furthermore, you can do your own what-if calculations in your own time as often as you like and at no cost. These will allow you to factor in new technologies, changes in workload and – if you add in a little thought about your own budget, current maintenance contracts and planned obsolescence – you will soon figure out not only what would be a good cost-effective upgrade but also when you should do it.
One of the best things about the Batch Model tool is that you can run it against data from other systems. What’s more, these systems don’t have to be running IBM i 7.2. You can save data from a 6.1 or 7.1 system, restore it to a 7.2 system and then model it. So if you are wondering what difference an upgrade might make to you, why not give your friendly IBM business partner a nudge and ask them to model your performance data for you.
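Getting the data across can be as simple as saving the Collection Services library on the older system and restoring it on the 7.2 one. A sketch in CL, with the caveat that QPFRDATA is only the default library name (yours may differ) and TAP01 is a placeholder device:

```
/* On the source system (6.1 or 7.1): save the Collection     */
/* Services performance data library. QPFRDATA is the default */
/* library name; TAP01 is a placeholder device.               */
SAVLIB LIB(QPFRDATA) DEV(TAP01)

/* On the target 7.2 system: restore the library, then model  */
/* it from the Performance tasks in IBM Navigator for i.      */
RSTLIB SAVLIB(QPFRDATA) DEV(TAP01)
```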
There is so much more in this tool, I really don’t know where to start but I think it is worth at least one more article and I would welcome your input as to where to focus next. You can contact me via the feedback below or through my website.
End of life for IBM i 6.1
IBM has announced it will remove IBM i 6.1 from sale in December 2014 and withdraw support for it in September 2015. This does not mean your applications will stop working if you don’t upgrade by then, but if IBM discovers any operating system issues after that date it will not PTF them.
There may be more pressing reasons to upgrade. For example, if you have to handle credit card data on your IBM i, you will need to do so on a supported level of the operating system. Or maybe, just maybe, you are one of my favourite types of user and you want to keep your system up-to-date just because you care about it.
If you are on 6.1 or older there has never been a better time to upgrade. The functions packed into versions 7.1 and 7.2 are truly legion.
A quick tip. Whenever you upgrade IBM i, you are able to skip a release. So if you are running 6.1, it is the same process whether you upgrade to 7.1 or 7.2. There is no need to upgrade from 6.1 to 7.1, then to 7.2. This will save you hours and, as I mentioned in a previous article, really could improve your application performance without you changing anything else.
Nice to see you…
It was great to see so many of you at September’s i-UG event in Rochdale. Our next meeting will take place at IBM Warwick on November 20. We’ve already confirmed a number of excellent guest speakers including Alison Butterill, Dr Frank Soltis, Trevor Perry and Paul Tuohy. Hope to see you there. More details on the i-UG website.