DB2 DPF SQL Workload Test

Nothing can degrade the performance of a DB2 partition like a resource-hungry or long-running query! When such queries execute on a logical partition, they either hog almost all the available CPU, memory, and disk resources or keep those resources locked for long periods, leaving little to nothing for other critical database operations. This can significantly slow down the partition and adversely impact the user experience. To ensure peak performance of the logical partitions at all times, such queries should be rapidly identified and optimized to minimize their resource usage. This is where the DB2 DPF SQL Workload test helps.

At configured intervals, this test compares the usage levels and execution times of all queries that started running on the logical partitions in the last measurement period and identifies a ‘top query’ in each of the following categories: CPU usage, memory usage, disk activity, and execution time. The test then reports the resource usage and execution time of these top queries and promptly alerts administrators if any query consumes more resources or takes longer to execute than it should. In such a scenario, administrators can use the detailed diagnosis of this test to view the inefficient queries and proceed to optimize them to enhance server performance.
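For reference, the kind of ranking this test performs can be approximated by hand against DB2's own monitoring interfaces. The sketch below is illustrative only and is not the test's actual implementation: it uses the documented MON_GET_PKG_CACHE_STMT table function (DB2 9.7 and later) to list the five costliest cached statements across all members of a DPF instance by total CPU time. Note that this function reports cumulative metrics since each statement entered the package cache, whereas the test works with per-interval figures; the class name, host, port, database, and credentials here are placeholders.

    import java.sql.*;

    /* Illustrative sketch only: rank cached statements on all DPF members by
       total CPU time, loosely mirroring this test's 'top query by CPU' category.
       All connection details are placeholders. */
    public class TopQueryByCpu {
        public static void main(String[] args) throws SQLException {
            String sql =
                "SELECT MEMBER, NUM_EXEC_WITH_METRICS, TOTAL_CPU_TIME, " +
                "       SUBSTR(STMT_TEXT, 1, 100) AS STMT " +
                "FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) " + // -2 = all members
                "ORDER BY TOTAL_CPU_TIME DESC " +
                "FETCH FIRST 5 ROWS ONLY";           // top-5, as in the detailed diagnosis
            try (Connection con = DriverManager.getConnection(
                     "jdbc:db2://dbhost:50000/SAMPLE", "egmonitor", "secret");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("member=%d execs=%d cpu_us=%d stmt=%s%n",
                        rs.getInt("MEMBER"), rs.getLong("NUM_EXEC_WITH_METRICS"),
                        rs.getLong("TOTAL_CPU_TIME"), rs.getString("STMT"));
                }
            }
        }
    }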

Target of the test : A DB2 DPF server

Agent deploying the test : An internal agent

Outputs of the test : One set of results for every logical partition of each currently active database on the DB2 database server

Configurable parameters for the test
  1. Test Period - How often should the test be executed.
  2. Host - The IP address of the DB2 server.
  3. Port - The port number through which the DB2 server communicates. The default port is 50000.
  4. User - Specify the name of a user who has any of the following privileges on the specified Database: SYSADM, SYSCTRL, SYSMAINT, or SYSMON. You can create a separate user on the OS hosting the DB2 server for this purpose and assign any of the aforesaid privileges to that user. The steps for this are detailed in Creating a Special User for Monitoring DB2.
  5. Password - Enter the password of the specified User.
  6. Confirm Password - Confirm the password by retyping it here.
  7. Database - Specify the name of the database on the monitored DB2 server to be used by this test.
  8. SSL - If the target database server is SSL-enabled, set the SSL flag to Yes. If not, set it to No.
  9. Trust Store File Name - The trust store file contains certificates from trusted Certificate Authorities (CAs). The eG agent uses these certificates to verify the authenticity of the server hosting DB2 UDB and to establish a secure SSL connection with it. Specify the name of the trust store file in this text box.
  10. Trust Store Password - The trust store password is the passphrase used to protect the trust store file. The eG agent requires this password whenever it accesses the trust store file to establish secure connections. Specify that password here. A sketch of a connection built from these settings follows this list.
  11. Detailed Diagnosis - To make diagnosis more efficient and accurate, eG Enterprise embeds an optional detailed diagnostic capability. With this capability, the eG agents can be configured to run detailed, more elaborate tests as and when specific problems are detected. To enable the detailed diagnosis capability of this test for a particular server, choose the On option. To disable the capability, choose the Off option.

    The option to selectively enable/disable the detailed diagnosis capability will be available only if the following conditions are fulfilled:

    • The eG manager license should allow the detailed diagnosis capability
    • Both the normal and abnormal frequencies configured for the detailed diagnosis measures should not be 0.
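To illustrate how the connection parameters above fit together, the following is a minimal sketch of a JDBC connection of the kind the eG agent establishes. The property names (sslConnection, sslTrustStoreLocation, sslTrustStorePassword) are standard IBM Data Server Driver for JDBC properties; the host, port, database, credentials, and trust store path are placeholders, and the agent's actual connection logic may differ.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    /* Illustrative sketch: connect to an SSL-enabled DB2 server using the
       parameters configured for this test. All values shown are placeholders. */
    public class Db2SslConnect {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "egmonitor");                 // User (SYSMON or higher)
            props.setProperty("password", "secret");                // Password
            props.setProperty("sslConnection", "true");             // SSL = Yes
            props.setProperty("sslTrustStoreLocation",
                              "/opt/certs/db2trust.jks");           // Trust Store File Name
            props.setProperty("sslTrustStorePassword", "changeit"); // Trust Store Password
            // URL format: jdbc:db2://<Host>:<Port>/<Database>
            try (Connection con = DriverManager.getConnection(
                     "jdbc:db2://dbhost:50000/SAMPLE", props)) {
                System.out.println("Connected to " +
                    con.getMetaData().getDatabaseProductVersion());
            }
        }
    }

When SSL is set to No, the three SSL-related properties are simply omitted.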
Measurements made by the test

The description, measurement unit, and interpretation of each measurement reported by this test are given below.

Maximum physical read rate

Description: Indicates the number of physical disk reads performed by the top query per execution.

Unit: Reads/execution

Interpretation: If the value of this measure is abnormally high, you can use the detailed diagnosis of this measure to view the top-5 (by default) queries generating the maximum physical disk read activity. From this, you can identify the top query in terms of the number of physical disk reads. You may then want to optimize that query to reduce its disk reads.

Maximum physical write rate

Description: Indicates the number of physical disk writes performed by the top query per execution.

Unit: Writes/execution

Interpretation: If the value of this measure is abnormally high, you can use the detailed diagnosis of this measure to view the top-5 (by default) queries generating the maximum physical disk write activity. From this, you can easily pick the query performing the most disk writes. You may then want to optimize that query to reduce its disk writes.

Maximum user CPU time

Description: Indicates the CPU time used for user-level processing upon execution of the top query.

Unit: Seconds

Interpretation: If the value of this measure is over 30 seconds, you can use the detailed diagnosis of this measure to view the top-5 (by default) queries hogging the CPU resources. From this, you can easily pick the query consuming the maximum CPU. You may then want to optimize that query to minimize its CPU usage.

Maximum elapsed time

Description: Indicates the running time of each execution of the top query.

Unit: Seconds

Interpretation: If the value of this measure crosses 10 seconds, you can use the detailed diagnosis of this measure to view the top-5 (by default) queries that are taking too long to execute. From this, you can easily pick the query with the maximum execution time. You may then want to optimize that query to minimize its execution time.

Maximum system CPU time

Description: Indicates the CPU time used for system-level processing upon execution of the top query.

Unit: Seconds

Interpretation: If the value of this measure is over 30 seconds, you can use the detailed diagnosis of this measure to view the top-5 (by default) queries consuming the maximum system CPU time. From this, you can easily pick the query consuming the maximum CPU. You may then want to optimize that query to minimize its CPU usage.
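The detailed diagnosis views described above can likewise be approximated with a monitoring query. The sketch below ranks dynamic SQL statements by user CPU time using the documented SYSIBMADM.SNAPDYN_SQL administrative view (which requires SYSMON authority or higher, in line with the User parameter); ordering by TOTAL_SYS_CPU_TIME or TOTAL_EXEC_TIME would loosely approximate the other measures. The exact metrics and time-windowing the test applies are not published, and the connection details are placeholders.

    import java.sql.*;

    /* Illustrative sketch: list the top-5 dynamic SQL statements by user CPU
       time per database partition, loosely mirroring the detailed diagnosis of
       the 'Maximum user CPU time' measure. */
    public class TopQueriesByUserCpu {
        public static void main(String[] args) throws SQLException {
            String sql =
                "SELECT DBPARTITIONNUM, NUM_EXECUTIONS, " +
                "       TOTAL_USR_CPU_TIME, TOTAL_SYS_CPU_TIME, " +
                "       SUBSTR(STMT_TEXT, 1, 100) AS STMT " +
                "FROM SYSIBMADM.SNAPDYN_SQL " +
                "ORDER BY TOTAL_USR_CPU_TIME DESC " +
                "FETCH FIRST 5 ROWS ONLY";           // top-5, matching the default diagnosis depth
            try (Connection con = DriverManager.getConnection(
                     "jdbc:db2://dbhost:50000/SAMPLE", "egmonitor", "secret");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("partition=%d execs=%d usr_cpu_s=%d sys_cpu_s=%d stmt=%s%n",
                        rs.getInt("DBPARTITIONNUM"), rs.getLong("NUM_EXECUTIONS"),
                        rs.getLong("TOTAL_USR_CPU_TIME"), rs.getLong("TOTAL_SYS_CPU_TIME"),
                        rs.getString("STMT"));
                }
            }
        }
    }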