1. Is SmartProduction CPU sensitive?
Yes. Your Product Code contains the information indicating your license mode (Trial or Licensed), the authorized CPU and model, and the expiration date of your authorization.
2. What do I need to do to run SmartProduction on more than one processor?
You need to license the new processor and get a new product code string from us. Then, you add a new $CODESP= line to your Global parameter member for the new code string. You can have as many $CODESP lines as you need to authorize all licensed copies.
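For example, a Global parameter member authorizing two processors would carry one $CODESP= line per licensed CPU. This is only a sketch; the code string values below are placeholders, not real product codes:

```
$CODESP=AAAAAAAAAAAAAAAA
$CODESP=BBBBBBBBBBBBBBBB
```

The first line holds the code string for the first licensed CPU, the second line the code string for the second.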
3. What do I do if I change processors?
You need to let us know of the change. If the size of your system is changing, there may be an upgrade fee. When it is resolved, you will get a new product code string from us. Then, you replace the $CODESP= line in your Global parameter member with the new code string.
4. How do I configure SmartProduction to recognize duplicate sorts?
This analysis is only available to shops running SYNCSORT, and only if the SYNCSORT SMF records are being collected. The Global Parameter member must have $SORTPROD=SYNCSORT, and the $SORTSMF= value set to the SMF record type assigned to SYNCSORT.
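For example, if your installation assigns user SMF record type 199 to SYNCSORT (199 is illustrative only; use the record type actually assigned at your site), the Global Parameter member would contain:

```
$SORTPROD=SYNCSORT
$SORTSMF=199
```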
5. How do I process the ".BIN" files from the diskette or e-mail?
Examine the README file and find the JCL to do a TSO RECEIVE. The .BIN files are created using the TSO TRANSMIT command and are a form of IEBCOPY unloaded data. The RECEIVE command, which can be run as part of a batch job, performs the reload.
6. When I run the Good Candidates report, only 5 or 6 records are selected, even though the input tape has many more records for the date range. Is this because of insufficient data?
Be sure that you select the same date range of raw SMF data as was selected in the job history selection/extraction program.
7. Why did I get an error running the RECEIVE as shown in the README?
On some systems, the specified parameters for the data set cause this problem. Rerun the command, but delete the data set allocation parameters other than disposition and volume. The RECEIVE should work by taking the data set attributes from the input .BIN files.
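As a sketch (the data set names below are placeholders for your own), a minimal batch RECEIVE can be run under IKJEFT01, the standard TSO batch terminal monitor program; the second SYSTSIN line answers RECEIVE's restore prompt with just the target data set name, letting RECEIVE take the remaining attributes from the .BIN file:

```jcl
//RECVBIN  JOB (ACCT),'RECEIVE BIN',CLASS=A,MSGCLASS=X
//TSOBATCH EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  RECEIVE INDATASET('YOUR.UPLOAD.FILE.BIN')
  DATASET('YOUR.SMARTPRD.LIBRARY')
/*
```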
8. What is the PRINTOFF command shown on the various browse panels?
This is a command to print the contents of the display. When you enter the command on the command line, a small window will pop up to allow you to specify the JES output class and the number of copies.
9. What is the consequence of not capturing the SyncSort SMF records?
No SyncSort-specific cases (e.g., duplicate sorts, sorting already-sorted data) will be reported.
10. How do I use the Trend Analysis Report?
Use the HISTSAVE job stream to preserve the current JOBHIST cluster. Rebuild the JOBHIST cluster with new data. Then run the Trend Analysis with the saved JOBHIST as the "old" data and the rebuilt JOBHIST as the "new" data.
11. Why do I receive message TMR246I Function READU R15=9 FDBK=0014?
This may occur when running the Good Job Candidates report, but it does not recur on a rerun. If you use a third-party LSR product (e.g., I/O Plus for VSAM from Softworks), you may have this problem because it incorrectly identifies the type of access being performed and dynamically puts the JOBHIST in batch LSR. Exclude the JOBHIST cluster from batch LSR processing.
12. Where do the dates in the summary section of the HISTFILL job come from?
They come from the SMFxxDTE field in the header of all SMF records. If a third-party product writes SMF records and is run under date simulation, the simulated date may appear in the SMF record, and this shows up in the HISTFILL summary.
13. Why do I get an ISPF error submitting SmartProduction JCL?
An ISPF error may occur if the obsolete ISPCNTLx DD statement is present in your LOGON procedure and EDIT=YES is specified on your SmartProduction Customization panel. Removing the DD statements is the best choice, since they are obsolete and have been replaced with dynamic allocation. If you cannot do this, then change EDIT to NO on the customization panel.
14. Can I add data to the JOBHIST file?
No. You cannot simply add data to the file, because the summarization that occurs during file loading would then be incorrect. Instead, refresh the JOBHIST file by rebuilding it from a new collection of SMF data.
15. How do I upgrade SmartProduction from an earlier level?
You should have received a copy of the upgrade instructions document with your new software. If you have not, please contact technical support for a copy.
16. Is SmartProduction compatible with...
SmartProduction is compatible with all MVS and OS/390 releases from MVS 5.2.2 up to and including z/OS 1.4. Since SmartProduction uses SMF records for its input, it will be compatible with all future releases of z/OS as well.
17. How often are SmartProduction's various job and data set cases reviewed for applicability with today's newer technologies?
SmartProduction's cases address the following:
a) Traditional tuning methods/issues/solutions
"Traditional" tuning methods, issues, and solutions (e.g., blocksizes, VSAM buffers, operational delays) are usually well known and relevant in all data centers. SmartProduction exposes the size and impact of such known performance issues, which might not otherwise be fully recognized. When applicable, SmartProduction also provides advanced solutions for these issues (see below).
Although these issues may seem elementary, they can yield substantial savings. Our experience shows that in most data centers, millions of I/O operations can be eliminated each day by tuning the blocksizes and VSAM buffers of only a few data sets!
b) Advanced tuning methods/issues/solutions
Various SmartProduction cases address new features and options that have recently been added to various systems and products. Examples include:
- General cases suggesting the use of data-in-memory techniques (e.g., Hiperbatch)
- Cases suggesting the use of specific DFSMS/DFP and SMS options
- Sort-related cases suggesting the use of dataspace sorting and hipersorting
- For SyncSort: a case suggesting the use of the new PARASORT facility
- For DB2: cases suggesting the use of new DSNUTILB performance-related options
c) Cookbook-type suggestions
Various SmartProduction cases provide suggestions from the field. Sources include official IBM cookbooks and documents, as well as production managers (whose information has been verified). Examples include:
- Combining data fetching from a data set that is currently performed by multiple jobs or job steps
- Switching from (relatively) slow utilities to faster utilities
- Specific application inefficiencies
18. Is disk I/O tuning obsolete?
Today's disk devices are much more sophisticated than the 3350/3380/3390 devices. New devices employ internal cache and buffers, perform asynchronous I/O internally, and so on. Therefore, many old disk I/O tuning considerations (e.g., disk head movement; placement of data sets relative to the VTOC) are now obsolete.
However, optimizing I/O still provides significant improvements in overall CPU usage and execution time. Optimization eliminates unnecessary activities and delays both before and after the point at which the I/O request is handled by the sophisticated hardware device.
When an application issues a read/write request, operating system routines (e.g., the access method routines and the input/output supervisor) receive control. These routines perform a large number of tasks, such as I/O operation preparation, start I/O initiation, buffer pool management, and I/O interrupt handling. The operation of these MVS routines entails significant CPU cost. By reducing the number of I/O operations, CPU resources can be saved and execution time is reduced.
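To illustrate the arithmetic with made-up but typical numbers (a sketch, not SmartProduction output): reading 1,000,000 80-byte records with an 800-byte blocksize takes 100,000 block reads, while a 27,920-byte blocksize needs fewer than 3,000 — and each read costs a pass through the access method and I/O supervisor routines described above.

```python
import math

def block_reads(records, lrecl, blksize):
    """Physical block reads needed for a fixed-blocked data set:
    records per block = BLKSIZE // LRECL, rounded up to whole blocks."""
    per_block = blksize // lrecl
    return math.ceil(records / per_block)

print(block_reads(1_000_000, 80, 800))     # 100000 reads (10 records/block)
print(block_reads(1_000_000, 80, 27_920))  # 2866 reads (349 records/block)
```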
19. Given new disk architectures, what are blocksize considerations?
SmartProduction has adopted IBM's SDB ("system-determined block size") standard when determining optimal data set block size. Today, SDB is clearly an industry standard. It is supported and promoted by several operating system components, by IBM products (such as DFSMShsm and DFSORT), and by non-IBM performance-oriented products (such as SyncSort and CA-Sort).
The optimal block size for a given device, data set, or application is derived from several technical considerations. Following are some important considerations that apply to all kinds of disk devices and user applications.
- Improved I/O performance
A larger block size reduces the number of actual I/O operations. As a result, CPU cycles are saved, and the application enters a wait state due to I/O activity less often. Parameters used for calculating the largest possible block size are: track capacity, the maximum block size supported by the access method used, LRECL, and RECFM.
- Optimal disk space usage
For a given disk type and LRECL, a certain block size provides optimal disk space usage. Parameters used for calculating this optimal block size are: track capacity, the maximum block size supported by the access method used, LRECL, RECFM, and the number of gaps generated.
- Increased virtual storage usage
A larger block size increases the usage of virtual storage, and therefore increases the usage of real storage. With today's huge storage configurations, this usually does not cause problems. Note: Blocks are usually buffered (for example, QSAM uses five buffers by default); consequently, the additional storage usage also depends on the number of buffers used.
In general, the larger the block size, the greater the number of CPU cycles that can be saved, and the less often the application enters a wait state due to I/O delays. However, additional considerations and limitations make use of the largest possible block size impractical and/or non-optimal. As a general rule, the block size derived by SDB (for 3380/3390 or equivalent devices: close to half-track capacity) reflects the various considerations and limitations, and provides an overall optimal value.
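The half-track rule for fixed-blocked data sets can be sketched as follows (the helper name is ours, not part of SmartProduction; 27,998 bytes is the half-track block capacity of a 3390):

```python
def sdb_blocksize(lrecl, half_track=27998):
    """Largest blocksize that is a whole multiple of LRECL and still
    fits within half a 3390 track -- the SDB-style choice for a
    fixed-blocked (RECFM=FB) data set."""
    if lrecl > half_track:
        raise ValueError("LRECL exceeds half-track capacity")
    return (half_track // lrecl) * lrecl

print(sdb_blocksize(80))   # 27920: 349 records per block
print(sdb_blocksize(133))  # 27930: 210 records per block
```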
20. What allowances does SmartProduction make for Virtual Tape environments?
SmartProduction includes various cases for cartridge and tape devices, including Virtual Tape Systems (VTS). For example, the following issues are addressed for VTS:
- Over-allocation of drives: Each VTS has a fixed number of virtual tape drives. During peak hours, jobs requiring a tape drive may be delayed when there is no free tape drive available. Therefore, it is important to ensure that each job using the VTS allocates only the number of tape drives it requires.
- Operational delays: For example, when two jobs request the same tape volume serial number, the operating system will delay one of them.
- Inefficient I/O: slow access mode, non-optimal blocksizes.
21.
For
specific scheduling environments, can SmartProduction select and/or
exclude specific production jobs for analysis?
SmartProduction
provides several options for selecting and analyzing only "relevant"
jobs. For example:
a) The JOBHIST database can be populated with specific data only.
The Extractor can be ordered to filter out specific SMF records
(via the INCLSYS statement and the Exit 3 routine).
If required, Axios will provide an Exit 3 routine to filter out
SMF data by userid (or any other relevant criterion available in
the SMF record header).
b) Reports can be ordered to select only specific jobs (by job name
or application name; some reports can also select jobs by system-id,
class, task type, CPU time, and run time).
In general, we suggest that you not globally exclude jobs
submitted from TSO, as some may access production data sets.
Note: All job-related reports allow you to specify an application
name, allowing the analysis of a group of jobs relevant to your
shop.
22. How often is SmartProduction updated?
A new release is provided every 6-8 months. When required, fixes are provided between releases; they can be found in the Fixes section of SmartProduction support.
23.
How
much does the Job History file increase in size from one release
to the next?
Information
was added to the JOBHIST in both V3.6.0 and V4.1.0. Some internal
JOBHIST records have been extended (e.g. types 14, 15, 30, 64, Sort),
while others have not been changed (e.g. types 77, 101).
Each and every site has a unique mix of record types. However, in
most sites, types 14 and 15 represent over 70 percent of all records,
allowing us to establish a good starting point.
Overall, when upgrading from V3.5.0 to V3.6.0, we estimate a 4 percent
increase.
Overall, when upgrading from V3.6.0 to V4.1.0, we estimate 4 percent
increase.
On the other hand, the Extractor is enhanced with each new release
to disregard certain vendor-related information in SMF, and not
record it in the JOBHIST. This is information that has been determined
to be irrelevant in tuning efforts.
In V3.6.0, the Extractor skips additional data for the BMC Software
Inc. IOA®/CONTROL® products.
In V4.1.0, the Extractor skips additional data for the BMC Software
Inc. IOA®/CONTROL® products and for the CA-Dispatch
product.
If you use any of these vendor products, there may even be a decrease
in the JOBHIST size.
24. What is the $RECPGM parameter and why is it important?
This parameter is important for cases J240, J241, and J242. For a more detailed explanation, see the White Paper on the Download Page.
25. Can I use IAM for my Job History File?
Yes, you can. See the White Paper for instructions.
26. Why do I get message IKJ56500I COMMAND TMRMxxxx NOT FOUND?
In Version 4.4.0, three SmartProduction programs were converted to function as TSO commands. In an ACF2 security environment with the Command Limiting Function active, these programs must be identified to ACF2. The three programs are TMRMDSAN, TMRMJBAN, and TMRMSTAT.