Dear all,
I have the following 1-hour AWR report (11.2.0.3, 24 cores):
            | Snap Id | Snap Time          | Sessions | Cursors/Session
Begin Snap: | 22522   | 29-Oct-13 13:00:32 | 63       | 2.3
End Snap:   | 22523   | 29-Oct-13 14:00:51 | 67       | 2.4
Elapsed:    |         | 60.32 (mins)       |          |
DB Time:    |         | 249.68 (mins)      |          |
Top 5 Timed Foreground Events
Event | Waits | Time(s) | Avg wait (ms) | % DB time | Wait Class |
db file sequential read | 1,234,292 | 6,736 | 5 | 44.97 | User I/O |
DB CPU | | 5,251 | | 35.05 | |
log file sync | 83,846 | 1,594 | 19 | 10.64 | Commit |
log file switch completion | 1,256 | 372 | 296 | 2.48 | Configuration |
enq: TX - index contention | 19,327 | 310 | 16 | 2.07 | Concurrency |
I know that this application commits very frequently (193 commits per second) and I have already asked the developer to review the commit frequency. But my question is about the high number of db file sequential read waits and the enq: TX - index contention waits. The SQL ordered by Gets section (and every other "SQL ordered by" section of the AWR) shows, at its top, an insert executed from a FORALL ... SAVE EXCEPTIONS loop, as shown below:
INSERT INTO Table_T
  (COL1_PK,  -- NOT NULL NUMBER(10)
   COL2_PK,  -- NOT NULL NUMBER(10)
   COL3,     -- NUMBER(10)
   COL4,     -- NUMBER(10)
   COL5,     -- NOT NULL NUMBER(10)
   COL6,     -- NUMBER(10)
   COL7,     -- NUMBER(10)
   COL8,     -- NUMBER(10)
   COL9,     -- NUMBER(10)
   COL10,    -- NUMBER(10)
   COL11)    -- NUMBER(10)
VALUES
  (:B11, :B1, :B2, :B3, :B4, :B5, :B6, :B7, :B8, :B9, :B10)
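For completeness, here is a minimal sketch of what I understand that bulk insert pattern to look like (the collection type, variable names and the staging_t row source are my assumptions, not the actual application code):

DECLARE
  TYPE t_rows IS TABLE OF table_t%ROWTYPE;
  l_rows     t_rows;
  dml_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(dml_errors, -24381);  -- ORA-24381: error(s) in array DML
BEGIN
  -- l_rows is assumed to be filled by the application; staging_t is a
  -- hypothetical source with the same columns as table_t
  SELECT * BULK COLLECT INTO l_rows FROM staging_t;

  FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
    INSERT INTO table_t VALUES l_rows(i);

  COMMIT;
EXCEPTION
  WHEN dml_errors THEN
    -- rows that failed are reported; the successful ones are kept
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX
        || ' failed with ORA-' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
    END LOOP;
    COMMIT;
END;
/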
1) There is a simple trigger attached to this table that fills in audit fields (a hypothetical sketch of such a trigger follows this list)
2) There is no FK on this table
3) The primary key XXX_PK of this table is (COL1_PK, COL2_PK)
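For illustration, the audit trigger from point 1 has roughly this shape (the trigger name and the audit columns are assumptions on my part; the post's column list does not show them):

CREATE OR REPLACE TRIGGER table_t_audit_trg  -- name assumed
BEFORE INSERT ON table_t
FOR EACH ROW
BEGIN
  :NEW.audit_user := SYS_CONTEXT('USERENV', 'SESSION_USER');  -- assumed column
  :NEW.audit_date := SYSDATE;                                 -- assumed column
END;
/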
The "Segments by" part shows this particular correlated information
Segments by ITL Waits
Owner | Tablespace Name | Object Name | Subobject Name | Obj. Type | ITL Waits | % of Capture
XXX   |                 | XXX_PK      |                | INDEX     | 578       | 75.36
Segments by Buffer Busy Waits
Owner | Tablespace Name | Object Name | Subobject Name | Obj. Type | Buffer Busy Waits | % of Capture
XXX   |                 | TABLE_T     |                | TABLE     | 142,235           | 34.94
XXX   |                 | XXX_PK      |                | INDEX     | 121,431           | 29.83
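For reference, these AWR figures can be cross-checked against the cumulative (since instance startup) counters with a query like this, using the statistic names exposed by v$segment_statistics:

SELECT object_name, object_type, statistic_name, value
FROM   v$segment_statistics
WHERE  object_name IN ('TABLE_T', 'XXX_PK')
AND    statistic_name IN ('ITL waits', 'buffer busy waits')
ORDER  BY value DESC;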
And I have the following information about leaf block splits, from the Instance Activity Stats section:
Statistic              | Total  | per Second | per Trans
leaf node 90-10 splits | 13,860 | 3.83       | 0.02
leaf node splits       | 73,153 | 20.21      | 0.09
branch node splits     | 268    | 0.07       | 0.00
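The same statistics can also be sampled live from v$sysstat between two points in time, to see the current split rate:

SELECT name, value
FROM   v$sysstat
WHERE  name IN ('leaf node splits', 'leaf node 90-10 splits', 'branch node splits');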
The table TABLE_T is inserted into by 5 to 20 concurrent processes. I have been told by the developer that each process has its own COL1_PK value, but all processes might share the same (or very close) COL2_PK values.
My questions are:
- Could this huge number of db file sequential read waits be explained by the index contention and the leaf block splits?
- How could I change the PK index, or the underlying table, so that the contention (and the logical I/O) on the index is reduced? By reversing the PK index? By hash partitioning the index? (See the sketch after these questions.)
- How can I judge whether 13,860 leaf node 90-10 splits is a high number or not?
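To make the second question concrete, here are sketches of the two options I have in mind (the partition count of 16 and the INITRANS value are arbitrary placeholders, and I assume the index can be rebuilt offline):

-- Option A: rebuild the PK index as a reverse key index, spreading
-- consecutive key values across different leaf blocks
ALTER INDEX xxx_pk REBUILD REVERSE;

-- Option B: recreate the PK index as a global hash-partitioned index,
-- giving each partition its own right-hand insertion point
ALTER TABLE table_t DROP CONSTRAINT xxx_pk;
CREATE UNIQUE INDEX xxx_pk ON table_t (col1_pk, col2_pk)
  GLOBAL PARTITION BY HASH (col1_pk) PARTITIONS 16;
ALTER TABLE table_t ADD CONSTRAINT xxx_pk
  PRIMARY KEY (col1_pk, col2_pk) USING INDEX xxx_pk;

-- For the ITL waits specifically, a larger INITRANS could also help:
ALTER INDEX xxx_pk REBUILD INITRANS 16;

Hashing on COL1_PK seems natural given that each process works with its own COL1_PK value. My understanding is that a reverse key index would prevent index range scans on the key, which is why I hesitate between the two options.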
Thanks in advance
Mohamed Houri
www.hourim.wordpress.com