It’s hard to believe, but 2014, the year that brought us v7.2 and POWER8, is almost over. Indeed, it may well be gone by the time you read this. So, without further ado, here is part two of my list of things that I love about version 7.2 of our favourite operating system.
Number 4 – Performance Monitoring
Performance Tools are nothing new to IBM i. For as long as I can recall we have had tooling from IBM and third-parties that will tell us about how our system is performing, but what is so different about the latest implementation is just how easy it is to use.
Based on the built-in IBM Navigator for i browser interface, there is no special software for you to load. Just point your browser of choice (I suggest Firefox for the best experience) to https://YourServerIP:2001, log in with your normal IBM i user name, select Performance and off you go. By default, your system will already have been collecting performance data for you, so you can hit the ground running.
If you are new to this tool, I would suggest you start with the Health Indicators. These simple traffic-light interfaces quickly show you any issues you might have on your system. They then allow you to drill down into the detail to find the issue.
Better still, most of these new Performance Tools features that IBM created for v7.2 have been PTF’d back to v7.1. And, if that were not enough, you can analyse data from one system on another, even taking data from a v6.1 system and analysing it on a v7.1 or v7.2 system.
In the screenshot below, the Health Indicator shows that the system is suffering from a disk issue.
You can then use the Select Action function to drill down to see exactly what type of issue and which job is causing it. For more details of this, check out my article IBM i 7.2: Finding the job that is killing your system.
Number 3 – Batch Model
For as long as there have been servers, people have been asking how much faster their systems would be if they added RAM or disk, or switched to the latest model. Finding the answer is often seen as a bit of a dark art, based on personal opinion or the guidelines of a single business partner or ISV.
What IBM has done is phenomenally clever and useful. It takes the detailed performance information it collects and allows you to model it through a series of “what if” questions and immediately see the difference they would make at both a system and a job level.
You can change the Batch Model to show the difference the following changes would make:
• adding another processor
• adding more HDD disk arms
• adding SSDs
• upgrading to a different model or generation of server
• increasing your current workload by a given percentage
• moving a workload from one timeslot to another.
You can compare graphs or tables of raw data of both the before and after to see whether it really would be worth spending your time and cash on that upgrade.
In the example below, we look at the improvement that adding another 13 disks would make to an existing client’s workload. The first graph shows a summary of the current situation, called the Measured Resource. The green line is the average disk percentage busy and the purple line the CPU.
In this next graph, we see the effect of the disk upgrade on that same workload. This is called the Modelled Resource. In this case we see the disk utilisation halves, allowing many of the long-running I/O-bound jobs to finish in half the time.
If you would like to know more about Batch Modelling, please read my October article IBM i 7.2: How to look into the future with Batch Model.
Number 2 – RCAC – Row and Column Access Control
Until v7.2, we really had only all-or-nothing read access to data, i.e. if a user had authority to read a file (table), then they had authority to read every record (row) in that file. This is something we have all just got used to, and it is only now, when we stop to think about IBM’s solution to this, that we realise just how exposed it has left us.
We have been raised on object-based security and we have always expected this to be the be-all and end-all of data security. If anything, it has been our applications that have restricted which subset of data we have been able to see.
This is, of course, still an excellent foundation for security, and is good as far as it goes, but one of the main problems is that these days we all connect to our systems from “smart” workstations, most of which have a wealth of data-access tools such as ODBC built in. This means that with just a little knowledge of the DSPJOB command you can work out which files are open behind any application screen you might be looking at.
Incidentally, DSPJOB is one of the few commands that you are still permitted to use even when your user profile has command-line use disabled. What is more, you can invoke it from within virtually any screen of any application.
Tip: if there is some data you want to export, go to the program you would normally use to view that data, then take the SysRq 3 option (usually Shift + Esc, then type 3 and press Enter). From the DSPJOB menu, take option 14 to display the open files. Now you know exactly which files in which libraries to start looking in.
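To see why this matters, consider what any ODBC- or JDBC-connected desktop tool could then do with that knowledge. Assuming a payroll file discovered this way (the library, file and user names here are all invented for illustration), nothing at the object level stops a read-authorised user from simply running:

```sql
-- Hypothetical example: PAYLIB/PAYROLL are invented names.
-- Once a user knows the library and file, any ODBC client
-- (Excel, Access, a query tool) can read every row and column
-- that object-level authority allows - which is all of them.
SELECT * FROM PAYLIB.PAYROLL;
```

The application screens you carefully designed to show each clerk only their own department never enter into it.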
In short, object-based security by itself is no longer good enough, and this is where Row and Column Access Control (RCAC) comes into play. It allows you to decide which users get to see which records and fields (rows and columns) in any given file or files.
This means you can even limit the ability of your system administrators to look at data while still being able to allocate the appropriate security access to their users. So, for example, that troublesome payroll file that you have always worried about protecting from prying eyes can now be secured with ease. With RCAC even security officers could be restricted from viewing the contents of such files while still being able to perform all necessary admin functions upon them.
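As a sketch of how that payroll scenario might be locked down, the SQL below uses the CREATE PERMISSION and CREATE MASK statements that implement RCAC in DB2 for i. The library, file, field and group names (PAYLIB, PAYROLL, EMP_USRPRF, SALARY, PAYADMIN) are all invented for this example; adapt them to your own schema.

```sql
-- Row permission: ordinary users see only their own record;
-- members of the (hypothetical) PAYADMIN group see everything.
CREATE PERMISSION PAYLIB.PAYROLL_ROW_ACCESS
  ON PAYLIB.PAYROLL
  FOR ROWS
  WHERE VERIFY_GROUP_FOR_USER(SESSION_USER, 'PAYADMIN') = 1
     OR EMP_USRPRF = SESSION_USER
  ENFORCED FOR ALL ACCESS
  ENABLE;

-- Column mask: hide the salary figure from everyone outside PAYADMIN.
CREATE MASK PAYLIB.PAYROLL_SALARY_MASK
  ON PAYLIB.PAYROLL
  FOR COLUMN SALARY
  RETURN CASE
           WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'PAYADMIN') = 1
             THEN SALARY
           ELSE 0
         END
  ENABLE;

-- Neither rule takes effect until RCAC is activated on the table.
ALTER TABLE PAYLIB.PAYROLL
  ACTIVATE ROW ACCESS CONTROL
  ACTIVATE COLUMN ACCESS CONTROL;
```

Once activated, the controls apply however the data is reached, be that the application, Query/400 or that ODBC connection from a PC, which is precisely the gap that object-level security alone could not close.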
Number 1 – Query/400 and OpnQryF use SQE
From v7.2 onward, Query/400 and Open Query File (OpnQryF) queries can be processed by DB2 for i’s modern SQL Query Engine (SQE) rather than the old-school Classic Query Engine (CQE). IBM has called this a happy coincidence, and it was certainly not on any roadmap. But whatever the reason, it means that upgrading to v7.2 should make your Query/400 and OpnQryF queries faster. And when I say faster, I mean a lot faster!
Tests that I have conducted have shown a 17x improvement in performance just by changing to V7.2.
The chart above shows the execution time for a Query/400 query on the exact same system, with the exact same configuration, with only the OS level changing. The upgrade reduced the first execution time of this rather horrid query from around 420 seconds to 25. If you would like to know more about what I tested and how, please check out my article IBM i 7.2 delivers 17x performance gain with zero change.
So much more to say…
I think it’s fair to say that there is much more packed into v7.2. For example, with IBM’s recent announcement of v7.2 TR1, one of the restrictions that was stopping some folks from upgrading to v7.2 – the 70GB load-source requirement – has been revised. Now, depending on how you present your disk, the load source can be as small as 35GB.
If you have something in particular you’d like me to discuss, you can contact me via the feedback below or through my website. In the meantime, I hope to see British readers at the next i-UG event on February 19. Keep an eye on the i-UG website for the forthcoming details.
Steve Bradshaw is the founder and managing director of Wolverhampton, UK-based Power Systems specialist Rowton IT Solutions and technical director of British IBM i user group i-UG. He has been a key contributor to PowerWire since 2012 and he also sits on the Common Europe Advisory Council (CEAC) which helps IBM shape the future of IBM i.