The DB2 tutorial I am following is: DB2 optimization (simple version). Preparation—monitors ON
db2 "update monitor switches using
lock ON sort ON bufferpool ON uow ON
table ON statement ON"
Turn on the monitor switches to collect the performance information needed for tuning.
The simplest and most effective—Bufferpool
A buffer pool is an area of memory used to temporarily read and change database pages (including table rows and index entries). The purpose of buffer pools is to improve database performance: accessing data in memory is much faster than accessing data on disk, so the fewer times the database manager needs to read from or write to disk, the better the performance. The key point for tuning is that most data operations (excluding large object and long field data) of applications connected to the database take place in the buffer pool.
By default, applications use the buffer pool IBMDEFAULTBP, which is created when the database is created. When the NPAGES value for the buffer pool in the catalog table is -1, the DB2 database configuration parameter BUFFPAGE controls the buffer pool size. Otherwise, the BUFFPAGE parameter is ignored and the buffer pool is created with the number of pages specified by NPAGES.
For applications that use only one buffer pool, it is recommended to change NPAGES to -1 so that BUFFPAGE controls the buffer pool size. This makes it easier to update and report the buffer pool size along with the other DB2 database configuration parameters.
After confirming that the BUFFPAGE parameter in the database configuration controls the buffer pool size, set it to an appropriate value. What counts as a reasonably large, safe value depends on the size of the database and the nature of the application. The default value of this parameter is usually very small and is unlikely to be sufficient.
db2 "get snapshot for all bufferpools"
In the output of a database snapshot or buffer pool snapshot, look for the "logical reads" and "physical reads" counters to calculate the buffer pool hit rate, which helps in tuning the buffer pool:
The buffer pool hit rate indicates the percentage of time the database manager does not need to load a page from disk (that is, the page is already in the buffer pool) to process page requests. The higher the hit rate of the buffer pool, the lower the frequency of using disk I/O. Calculate the buffer pool hit rate as follows:
(1 - ((buffer pool data physical reads + buffer pool index physical reads) /
(buffer pool data logical reads + buffer pool index logical reads))
) * 100%
This calculation takes into account all pages (index and data) cached by the buffer pool. Ideally, the ratio should exceed 95%, and be as close to 100% as possible. To increase the buffer pool hit rate, try these methods:
Increase the buffer pool size.
Consider allocating multiple buffer pools: if possible, allocate a dedicated buffer pool for the tablespace of each frequently accessed large table, and one buffer pool for a set of small tables. Then experiment with buffer pools of different sizes to see which combination gives the best performance.
If allocating more memory does not improve performance, avoid giving the buffer pool excess memory. Buffer pool sizes should be determined from snapshot information taken in the test environment.
A buffer pool that is too small generates excessive, unnecessary physical I/O. A buffer pool that is too large puts the system at risk of operating system paging and wastes CPU cycles managing the over-allocated memory. The right buffer pool size lies at a balance point between "too small" and "too large": the point at which returns begin to diminish.
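The hit-rate formula above can be expressed as a small helper. This is a sketch of my own; the function name and parameter names simply mirror the counters in the snapshot output:

```python
def bufferpool_hit_rate(data_physical, index_physical, data_logical, index_logical):
    """Buffer pool hit rate (%) from snapshot counters.

    Mirrors: (1 - (physical reads / logical reads)) * 100
    """
    logical = data_logical + index_logical
    if logical == 0:
        return 0.0  # no page requests recorded yet
    physical = data_physical + index_physical
    return (1.0 - physical / logical) * 100.0

# Example: 150 physical reads against 5000 logical reads -> 97.0% hit rate
print(bufferpool_hit_rate(100, 50, 4000, 1000))
```

Values above 95% suggest the buffer pool is large enough; a much lower value is a signal to revisit BUFFPAGE.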
Get the best performance—SQL
A bad SQL statement can ruin everything. A single, relatively simple SQL statement can bring a well-tuned database and machine to its knees. For many such statements, no DB2 UDB configuration parameter in the world can correct the high cost caused by bad SQL.
Worse, DBAs are often bound by constraints: the SQL cannot be changed (perhaps because it is supplied by an application vendor). That leaves the DBA only three options:
1. Change or add index
2. Change the cluster
3. Change catalog statistics
Robust applications consist of thousands of different SQL statements. The frequency with which these statements are executed varies with the application's functions and daily business needs. The real cost of an SQL statement is the cost of a single execution multiplied by the number of times it is executed.
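That "real cost" ranking can be sketched as follows. The statement data here is purely illustrative; in practice the per-execution cost would come from Explain output and the execution counts from SQL Event Monitor data:

```python
# Hypothetical statement data: unit_cost is the cost of one execution,
# executions is how many times the statement ran.
statements = [
    {"sql": "SELECT ... FROM ORDERS",    "unit_cost": 120.0,  "executions": 50000},
    {"sql": "SELECT ... FROM AUDIT_LOG", "unit_cost": 9000.0, "executions": 10},
    {"sql": "UPDATE INVENTORY ...",      "unit_cost": 15.0,   "executions": 1000000},
]

for s in statements:
    # real cost = cost of one execution x number of executions
    s["real_cost"] = s["unit_cost"] * s["executions"]

# Tune the statements with the highest real cost first.
worst_first = sorted(statements, key=lambda s: s["real_cost"], reverse=True)
```

Note how the cheap-per-execution UPDATE, run a million times, outranks the expensive but rarely run report query.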
The important task facing every DBA is to identify the statements with the highest "real cost" and reduce the cost of those statements.
The resource cost of executing an SQL statement can be calculated with the native DB2 Explain utility, with tools from third-party vendors, or from DB2 UDB SQL Event Monitor data. However, statement execution frequency can only be determined through careful and time-consuming analysis of DB2 UDB SQL Event Monitor data.
Optimal performance requires not only eliminating high-cost SQL statements, but also ensuring that the underlying physical infrastructure is appropriate. Peak performance is achieved only when all the tuning knobs are set just right, memory is allocated effectively to the pools and heaps, and I/O is spread evenly across the disks.
Not to be omitted—Lock
These lock-related controls are database configuration parameters:
LOCKLIST specifies the amount of storage allocated to the lock list. Each database has one lock list, which holds the locks held by all applications concurrently connected to the database. Locking is the mechanism the database manager uses to control concurrent access to data by multiple applications. Both rows and tables can be locked. Each lock requires 32 or 64 bytes of the lock list, depending on whether other locks are already held on the object:
64 bytes are required to hold a lock on an object on which no other locks are held.
32 bytes are required to record a lock on an object on which a lock is already held.
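The two byte counts above can be turned into a rough lock-list sizing estimate. This helper is my own sketch, using the 64/32-byte figures quoted in the text (exact sizes vary by DB2 version):

```python
def lock_list_bytes(locks_per_object):
    """Estimate lock list usage from per-object lock counts.

    Per the figures above: the first lock on an object takes 64 bytes,
    and each additional lock on the same object takes 32 bytes.
    """
    total = 0
    for n in locks_per_object:
        if n > 0:
            total += 64 + 32 * (n - 1)
    return total

# Example: one object with a single lock, another with three locks
print(lock_list_bytes([1, 3]))  # 64 + (64 + 2*32) = 192 bytes
```

An estimate like this, multiplied out over the expected number of concurrent applications, gives a starting point for choosing LOCKLIST.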
MAXLOCKS defines the percentage of the lock list that one application may hold before the database manager performs lock escalation. When the percentage of the lock list used by an application reaches MAXLOCKS, the database manager escalates that application's locks, replacing row locks with table locks and thereby reducing the number of locks in the list. Lock escalation also occurs if the lock list runs out of space. The database manager decides which locks to escalate by scanning the application's lock list and finding the table with the most row locks. If replacing those row locks with a single table lock brings the application back under the MAXLOCKS threshold, escalation stops; otherwise it continues until the percentage of the lock list held falls below MAXLOCKS. The MAXLOCKS parameter multiplied by the MAXAPPLS parameter cannot be less than 100.
Although the escalation process itself does not take much time, locking entire tables (rather than individual rows) reduces concurrency, and overall database performance may suffer from subsequent accesses to the tables affected by lock escalation.
The default value of LOCKTIMEOUT is -1, which means no lock timeout (for OLTP applications this can be disastrous), yet many DB2 users leave LOCKTIMEOUT at -1. Instead, set LOCKTIMEOUT to a short value such as 10 or 15 seconds: applications waiting too long on locks can create an avalanche effect.
First, use the following command to check the value of LOCKTIMEOUT:
db2 "get db cfg for DBNAME"
And look for lines containing the following text:
Lock timeout (sec) (LOCKTIMEOUT) = -1
If the value is -1, consider changing it to 10 or 15 seconds (be sure to first ask the application developer or vendor to confirm that the application can handle lock timeouts):
db2 "update db cfg for DBNAME using LOCKTIMEOUT 15"
At the same time, monitor the number of lock waits, the lock wait time, and the amount of lock list memory in use, by issuing the following command:
db2 "get snapshot for database on DBNAME"
If "Lock list memory in use (Bytes)" exceeds 50% of the defined LOCKLIST size, increase the number of 4 KB pages in the LOCKLIST database configuration parameter.
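The 50% check above can be written out explicitly. This is a sketch of my own; the inputs correspond to the configured LOCKLIST (in 4 KB pages) and the snapshot's "Lock list memory in use (Bytes)" value:

```python
LOCKLIST_PAGE_BYTES = 4096  # LOCKLIST is configured in 4 KB pages

def locklist_over_threshold(locklist_pages, lock_memory_in_use, threshold=0.5):
    """True if 'Lock list memory in use (Bytes)' exceeds the given
    fraction of the configured LOCKLIST capacity."""
    capacity = locklist_pages * LOCKLIST_PAGE_BYTES
    return lock_memory_in_use > threshold * capacity

# Example: LOCKLIST of 100 pages (409600 bytes) with 250000 bytes in use
print(locklist_over_threshold(100, 250000))  # over the 50% mark
```

When the check fires, raising LOCKLIST (and reviewing MAXLOCKS) is preferable to letting lock escalation degrade concurrency.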