Informing the IBM Community

Old solutions causing new problems – DSPFD and RGZPFM


A cautionary tale that I’m sure many of you will never have to deal with, but if it stops one person from pulling out quite as much hair as I did, then it can’t be all bad.

One of the bits of maintenance I can normally get my customers to agree to, even if they’re a bit shy about their PTFs and the like, is running a weekly re-organise of files to get shot of some deleted records and trim the disk usage a tad.

One customer in particular has been doing this the same way for many a year: run a DSPFD TYPE(*MBR) to an outfile, loop through that file, and if the number of deleted records is over a threshold (either as a % of total records or just a plain number of records), attempt to re-organise the member.
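The core of that weekly job boils down to something like the following sketch (the library, file, and outfile names here are illustrative, not the customer’s actual code):

```
/* Dump member-level stats for every file in the library to an outfile */
DSPFD      FILE(MYLIB/*ALL) TYPE(*MBR) OUTPUT(*OUTFILE) +
             OUTFILE(QTEMP/FILMBRS)

/* The outfile (format QAFDMBR) is then read in a loop; for each       */
/* member, the deleted record count (MBNDTR) is compared against the   */
/* current record count (MBNRCD), and anything over the threshold gets */
/* a re-organise:                                                      */
RGZPFM     FILE(MYLIB/SOMEFILE) MBR(SOMEMBR)
```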

Here’s one I prepared earlier, a prime candidate for a cleanup as the number of deleted records is roughly 3x the number of active records in the file.

Total number of members  . . . . . . . . . :                      1
Total number of members not available  . . :                      0
Total records  . . . . . . . . . . . . . . :                  12560
Total deleted records  . . . . . . . . . . :                  36774

Nice and simple, and it does the job. It also reports anything it can’t lock and re-org, along with what was locking it at the time, so you can sit back and review at your leisure.

I thought I’d add this to a server I’ve recently taken over management of; no point re-inventing the wheel when I’ve already got the solution in front of me. Except I ended up with far more objects reporting failed re-organisations than I’d expected, and oddly they seemed to be logical files. If you’ve ever run a DSPFD over one of your logicals, you’ll know it doesn’t show a deleted record count.

For comparison, here’s an example of one of the logicals over my file. You’ll see it gives neither a record count nor a number of deleted records, so it wouldn’t be considered by my weekly job.

Total number of members  . . . . . . . . . :                      1
Total number of members not available  . . :                      0

So at first I was very puzzled as to where my problem was, until I realised that just because WRKOBJ shows an object as a logical file doesn’t mean it came from an LF source member. One of my “logicals” was actually an SQL view, which apparently does include the number of deleted records in its DSPFD output.

Again, my results from DSPFD, this time focusing on the specific SQL view that was causing me a headache.

Total number of members  . . . . . . . . . :                      1
Total number of members not available  . . :                      0
Total records  . . . . . . . . . . . . . . :                  12560
Total deleted records  . . . . . . . . . . :                  36774

So the reason the RGZPFM command was failing was simply that it wasn’t being run against a physical file. When the program was originally written, many a moon ago, nobody considered that this scenario could crop up.

My neat solution is to amend the DSPFD to include FILEATR(*PF), which not only clears up most of my errors but also saves a precious few KBs of disk space and seconds of processing, I’m sure. On my original customer’s server the DSPFD outfile is used in multiple processes, so there I’m instead going to add “MBFTYP = ‘P’” to the record selection to get around it.
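In CL terms the two fixes look something like this (library and outfile names are illustrative, and OPNQRYF is just one way of applying that record selection ahead of an existing read loop):

```
/* New server: only physical files make it into the outfile at all    */
DSPFD      FILE(MYLIB/*ALL) TYPE(*MBR) OUTPUT(*OUTFILE) +
             OUTFILE(QTEMP/FILMBRS) FILEATR(*PF)

/* Original server: the outfile feeds several processes, so filter    */
/* in the reader instead, e.g. with OPNQRYF before the existing loop  */
OPNQRYF    FILE((QTEMP/FILMBRS)) QRYSLT('MBFTYP *EQ "P"')
```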



2 responses to “Old solutions causing new problems – DSPFD and RGZPFM”

  1. RGZPFMs can be heavy and disruptive activities.
    It looks better to consider the number of deleted records compared to the file size (the physical size, including deleted records) but, in order to gauge the space saved, with the record length too.
    And in my experience (just order your list of files this way), finding an optimal “weight” composed of these dimensional characteristics works well.
    My heuristic solution was (let me give it to you for free):
    threshold = ((deleted*rec_length)/100*(deleted*rec_length))/(deleted+current)
    For example RGZPFM only when THRESHOLD >= 100,000,000.
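    Worked through with the counts from the DSPFD output above and an assumed record length of 100 bytes (in practice the real length comes from the outfile too):

    ((36774*100)/100*(36774*100))/(36774+12560) ≈ 2,741,000

    That is well under the 100,000,000 cut-off, so that member would be left alone until the wasted space grows considerably.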

    1. David Shears

      Thanks SiBe,

      I use a slightly more basic calculation, so I’ll be interested to compare the two outputs. Normally I’m working on “number of deleted records > x” and “number of deleted records > x% of total records on file”.