BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
5 rows selected.
We have an UPDATE statement that performs a very large number of 'db file sequential read' waits on each execution, even though the plan shows a single-row update accessed via a unique index.
From the TKPROF:
SQL ID: 065zmdmsjj7vg Plan Hash: 3820849174
Update PART_ITEM set row_filter_exp=null, row_filter_hash = null
where
PART_ITEM_ID = :1 and row_filter_hash is not null
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute    258     13.70     872.11     113473     516000          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      258     13.70     872.11     113473     516000          0           0
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 119
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                    113473        0.52        861.15
  SQL*Net message to client                     258        0.00          0.00
  SQL*Net message from client                   258        1.51          4.85
So this is 113473/258 ≈ 440 'db file sequential read' waits per execution. The ASH report shows these reads were almost entirely against the table partitions (not the index).
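For completeness, the segment-level attribution came from something along these lines (a sketch, not the exact query; it assumes Diagnostics Pack licensing for ASH and that CURRENT_OBJ# was populated for the read samples):

```sql
-- Sketch: attribute the 'db file sequential read' samples for this
-- SQL_ID to the objects they landed on.
select o.object_name, o.subobject_name, o.object_type,
       count(*) as samples
from   v$active_session_history h
       left join dba_objects o on o.object_id = h.current_obj#
where  h.sql_id = '065zmdmsjj7vg'
and    h.event  = 'db file sequential read'
group  by o.object_name, o.subobject_name, o.object_type
order  by samples desc;
```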
This table has 4 CLOB columns, all defined ENABLE STORAGE IN ROW; the statement sets one of the CLOB columns to null (along with one VARCHAR2(128 BYTE) column). The table is range/interval partitioned with 7 partitions and has 47 million rows, but none of the LOB partition segments has grown beyond its original default size of 8 MB, so I believe very few LOB values are actually stored out-of-line in the LOB segments.
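The LOB segment sizes can be confirmed with something like the following (a sketch; owner predicates omitted for brevity, and the DBA_LOB_PARTITIONS/DBA_SEGMENTS join is from memory):

```sql
-- Sketch: confirm the LOB partition segments are still at their
-- initial ~8 MB size, i.e. few LOB values are stored out-of-line.
select lp.column_name, lp.lob_partition_name, lp.in_row,
       round(s.bytes/1024/1024, 1) as mb
from   dba_lob_partitions lp
       join dba_segments s
         on  s.segment_name   = lp.lob_name
         and s.partition_name = lp.lob_partition_name
where  lp.table_name = 'PART_ITEM'
order  by lp.column_name, lp.lob_partition_name;
```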
Here is the plan:
SQL_ID 065zmdmsjj7vg, child number 0
-------------------------------------
Update PART_ITEM set row_filter_exp=null, row_filter_hash = null where
PART_ITEM_ID = :1 and row_filter_hash is not null
Plan hash value: 3820849174
--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name         | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT                    |              |       |       |     3 (100)|          |       |       |
|   1 |  UPDATE                             | PART_ITEM    |       |       |            |          |       |       |
|*  2 |   TABLE ACCESS BY GLOBAL INDEX ROWID| PART_ITEM    |     1 |   102 |     3   (0)| 00:00:01 | ROWID | ROWID |
|*  3 |    INDEX UNIQUE SCAN                | PK_PART_ITEM |     1 |       |     2   (0)| 00:00:01 |       |       |
--------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("ROW_FILTER_HASH" IS NOT NULL)
3 - access("PART_ITEM_ID"=:1)
Suspecting migrated rows, I ran the Segment Advisor on the table partitions, and it reports that 5 of the partitions have over 30% chained rows. However, we do not see any significant number of 'table fetch continued row' events in our AWR report, which should increment whenever such migrated rows are encountered and followed to their new location:
Statistic                          Total   per Second   per Trans
table fetch by rowid           5,162,227     1,431.66   20,648.91
table fetch continued row            594         0.16        2.38
The average row length is about 350 bytes (even including the in-row LOBs), so I think truly chained rows (i.e. rows longer than the block size) should also be minimal.
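One further check I could run to distinguish migrated from truly chained rows is ANALYZE ... LIST CHAINED ROWS into the standard CHAINED_ROWS table (a sketch; the partition name SYS_P101 is hypothetical, substitute a real one):

```sql
-- Sketch: list chained/migrated rows for one partition into the
-- CHAINED_ROWS table created by $ORACLE_HOME/rdbms/admin/utlchain.sql.
-- @?/rdbms/admin/utlchain.sql

analyze table PART_ITEM partition (SYS_P101)  -- hypothetical partition name
  list chained rows into CHAINED_ROWS;

select count(*)
from   CHAINED_ROWS
where  table_name = 'PART_ITEM';
```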
Any suggestions on where to look here?
Thanks,
Mike