Hi, we are using Oracle Exadata version 12.1.2.0. We have an UPDATE that is driven through a pipelined table function (declared with PARALLEL_ENABLE). It had been working without any issue and finishing in ~15 minutes, but now that the data volume has grown in recent days the UPDATE is running ~45 minutes to 1 hour. I am not very familiar with table function usage, but a few things I can see inside this one: 1) we use BULK COLLECT with LIMIT 100, and the COMMIT sits outside the loop, i.e. it must be committing all ~40 million rows in one shot.
In dev I tested an update of ~1.5 million rows: I tried raising the BULK COLLECT limit to 500-1000 and also moving the COMMIT inside the loop, but saw no difference in performance. I also tried doing the UPDATE in a plain single-statement fashion, with no call to the table function, and in that case the query runs far longer than the current execution time using the table function. So the table function is definitely helping the query at the moment. I have the questions below.
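For reference, the single-statement version I tried in dev was roughly along these lines (reconstructed here as a MERGE from the cursor query and the FORALL inside fun1 shown further down; the exact statement and predicates in my dev test may have differed, and it assumes the driving query returns each target row only once):

-- Sketch of the plain single-statement form tried in dev (approximate).
-- For the update part itself to run in parallel, the session would also
-- need ALTER SESSION ENABLE PARALLEL DML.
MERGE /*+ parallel(t1,10) */ INTO TABLE1 t1
USING (SELECT /*+ ordered full(TABLE2) parallel(TABLE2,10)
              use_nl(TABLE3) index(TABLE3 TABLE3_IX3) */
              TABLE2.feid, TABLE2.FXID, TABLE2.Instance_ID, TABLE2.Partition_date
         FROM TABLE2, TABLE3
        WHERE table2_c1 = 'F'
          AND TABLE3.head_id = xxxxxxxx
          AND TABLE3.head_id = TABLE2.head_id
          AND TABLE3.FXID = TABLE2.FXID
          AND TABLE3.gr_id = TABLE2.gr_id
          AND TABLE3.FX_Instance_ID = TABLE2.FX_Instance_ID
          AND TABLE2.table2_hesdstat = 'XXX'
          AND TABLE3.STAT = 'XXX') src
   ON (    t1.Instance_ID    = src.Instance_ID
       AND t1.Partition_date = src.Partition_date
       AND t1.TABLE1_C4      = src.feid)
WHEN MATCHED THEN UPDATE
   SET t1.date_modify = SYSDATE,
       t1.modify_by   = 'XXXXX',
       t1.FXID        = src.FXID;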
1) It seems the table function currently helps the UPDATE run faster than a normal UPDATE statement, but is there a way I can push it further to get more throughput and a better overall response time?
2) I see a PARALLEL(10) hint in the SELECT query. Does that mean the table function will be invoked by ~10 parallel slave sessions, i.e. each one will process/update ~40/10 = ~4 million rows?
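(In case it helps clarify what I am asking: I believe the slave allocation could be confirmed while the statement runs with something like the query below against GV$PX_SESSION. This is only a sketch, not output from our monitoring; the grouping and filtering are just illustrative.)

-- Sketch: count the PX slaves attached to each query coordinator.
SELECT qcsid,
       qcinst_id,
       server_group,
       server_set,
       COUNT(*)        AS slave_count,
       MAX(degree)     AS degree,
       MAX(req_degree) AS req_degree
  FROM gv$px_session
 WHERE server_group IS NOT NULL   -- slave rows only, skip the QC itself
 GROUP BY qcsid, qcinst_id, server_group, server_set;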
3) We often see that when some other SELECT query does an index range scan on the same table after this large UPDATE has finished, that SELECT spends its time on "delayed block cleanout" or "transaction tables consistent reads - undo records applied" and fails with ORA-01555. The index whose reads suffer the most is a function-based index IX1, defined as below. Is this happening because we commit ~40 million rows in one shot and so leave behind many index blocks to be cleaned out by the following SELECT, and would committing inside the loop help minimise this effect? (A sketch of the per-batch-commit variant I tested in dev follows the index definition.)
"DECODE(TO_CHAR("FXID"),NULL,1,NULL), DECODE(TO_CHAR("FXID"),NULL,"ETYP",NULL), DECODE(TO_CHAR("FXID"),NULL,"EID",NULL), DECODE(TO_CHAR("FXID"),NULL,"CTGID",NULL)"
Below are the sample query, the code of the table function, and the SQL Monitor report of the main query from production. And yes, although the SELECT query has some hints embedded and the row source statistics do not show a good estimation match, that SELECT on its own finishes in a few minutes, so the main bottleneck is the UPDATE, which is what I am trying to fix.
CREATE OR REPLACE FUNCTION USER1.fun1 (MYCUR IN SYS_REFCURSOR, input_type IN VARCHAR2)
    RETURN num_array
    PARALLEL_ENABLE (PARTITION MYCUR BY ANY)
    PIPELINED
IS
    PRAGMA AUTONOMOUS_TRANSACTION;
    .....
BEGIN
    LOOP
        FETCH MYCUR BULK COLLECT INTO collection_tab1, collection_tab2,
                                      collection_tab3, collection_tab4 LIMIT 100;
        EXIT WHEN collection_tab1.COUNT() = 0;
        FORALL i IN collection_tab1.FIRST .. collection_tab1.LAST
            UPDATE TABLE1
               SET date_modify = SYSDATE,
                   modify_by   = 'XXXXX',
                   FXID        = collection_tab2(i)
             WHERE Instance_ID    = collection_tab3(i)
               AND Partition_date = collection_tab4(i)
               AND TABLE1_C4      = collection_tab1(i);
        vn_count := vn_count + collection_tab1.COUNT;
    END LOOP;
    CLOSE MYCUR;
    COMMIT;
    PIPE ROW(vn_count);
    RETURN;
END fun1;
/
SELECT SUM (COLUMN_VALUE)
FROM TABLE (
USER1.fun1 (
CURSOR (SELECT /*+ ordered full(TABLE2) parallel(TABLE2,10) use_nl(TABLE3) index(TABLE3 TABLE3_IX3) */
TABLE2.feid,TABLE2.FXID,TABLE2.Instance_ID,TABLE2.Partition_date
FROM TABLE2 ,TABLE3
WHERE table2_c1 = 'F'
AND TABLE3.head_id=xxxxxxxx
AND TABLE3.head_id =TABLE2.head_id
AND TABLE3.FXID = TABLE2.FXID
AND TABLE3.gr_id = TABLE2.gr_id
AND TABLE3.FX_Instance_ID = TABLE2.FX_Instance_ID
AND TABLE2.table2_hesdstat = 'XXX'
AND TABLE3.STAT = 'XXX'
),
'F')) ;
Global Information
------------------------------
Status : DONE (ALL ROWS)
Instance ID : 1
SQL Execution ID : 16780874
Execution Started : 06/01/2019 12:26:43
First Refresh Time : 06/01/2019 12:26:43
Last Refresh Time : 06/01/2019 13:08:35
Duration : 2512s
Module/Action : JDBC Thin Client/-
Program : JDBC Thin Client
PLSQL Entry Ids (Object/Subprogram) : 99118,6
PLSQL Current Ids (Object/Subprogram) : 99118,6
Fetch Calls : 1
Binds
========================================================================================================================
| Name | Position | Type | Value |
========================================================================================================================
| :B3 | 1 | NUMBER | xxxxxx |
| :B2 | 2 | NUMBER | 4 |
| :B1 | 3 | NUMBER | 11 |
========================================================================================================================
Global Stats
===========================================================================================================================
| Elapsed | Cpu | IO | Application | Concurrency | Cluster | PL/SQL | Fetch | Buffer | Read | Read | Cell |
| Time(s) | Time(s) | Waits(s) | Waits(s) | Waits(s) | Waits(s) | Time(s) | Calls | Gets | Reqs | Bytes | Offload |
===========================================================================================================================
| 25152 | 11044 | 13850 | 0.00 | 258 | 0.27 | 13 | 1 | 733M | 13M | 107GB | 2.91% |
===========================================================================================================================
Parallel Execution Details (DOP=10 , Servers Allocated=10)
====================================================================================================================================================================
| Name | Type | Server# | Elapsed | Cpu | IO | Application | Concurrency | Cluster | PL/SQL | Buffer | Read | Read | Cell | Wait Events |
| | | | Time(s) | Time(s) | Waits(s) | Waits(s) | Waits(s) | Waits(s) | Time(s) | Gets | Reqs | Bytes | Offload | (sample #) |
====================================================================================================================================================================
| PX Coordinator | QC | | 0.13 | 0.13 | 0.00 | 0.00 | | 0.00 | | 27 | 3 | 81920 | NaN% | |
| p000 | Set 1 | 1 | 2496 | 1112 | 1358 | | 27 | 0.04 | 1.30 | 74M | 1M | 10GB | 1.96% | |
| p001 | Set 1 | 2 | 2605 | 1110 | 1469 | | 26 | 0.01 | 1.27 | 74M | 1M | 11GB | 3.85% | |
| p002 | Set 1 | 3 | 2492 | 1116 | 1347 | | 29 | 0.03 | 1.35 | 74M | 1M | 11GB | 4.76% | |
| p003 | Set 1 | 4 | 2565 | 1143 | 1397 | | 25 | 0.01 | 1.41 | 76M | 1M | 11GB | 3.85% | |
| p004 | Set 1 | 5 | 2426 | 1069 | 1333 | | 25 | 0.04 | 1.27 | 71M | 1M | 10GB | 1.96% | |
| p005 | Set 1 | 6 | 2419 | 1038 | 1359 | | 23 | 0.02 | 1.22 | 69M | 1M | 10GB | 4.76% | |
| p006 | Set 1 | 7 | 2519 | 1124 | 1368 | | 26 | 0.05 | 1.33 | 75M | 1M | 11GB | 2.91% | |
| p007 | Set 1 | 8 | 2590 | 1141 | 1420 | | 29 | 0.02 | 1.38 | 76M | 1M | 11GB | 3.85% | |
| p008 | Set 1 | 9 | 2439 | 1053 | 1358 | | 28 | 0.03 | 1.22 | 69M | 1M | 10GB | 2.91% | |
| p009 | Set 1 | 10 | 2601 | 1137 | 1443 | | 21 | 0.02 | 1.41 | 75M | 1M | 12GB | 2.91% | |
====================================================================================================================================================================
SQL Plan Monitoring Details (Plan Hash Value=625243985)
=====================================================================================================================================================================================================
| Id | Operation | Name | Rows | Cost | Time | Start | Execs | Rows | Read | Read | Cell | Mem | Activity | Activity Detail |
| | | | (Estim) | | Active(s) | Active | | (Actual) | Reqs | Bytes | Offload | (Max) | (%) | (# samples) |
=====================================================================================================================================================================================================
| 0 | SELECT STATEMENT | | | | 2386 | +127 | 1 | 1 | | | | | | |
| 1 | SORT AGGREGATE | | 1 | | 1 | +2512 | 1 | 1 | | | | | | |
| 2 | PX COORDINATOR | | | | 1 | +2512 | 11 | 10 | 3 | 81920 | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | 1 | +2512 | 10 | 10 | | | | | | |
| 4 | SORT AGGREGATE | | 1 | | 1 | +2512 | 10 | 10 | | | | | | |
| 5 | VIEW | | 8168 | 44 | 1 | +2512 | 10 | 10 | | | | | | |
| 6 | COLLECTION ITERATOR PICKLER FETCH | FUN1 | 8168 | 44 | 2511 | +2 | 10 | 20 | | | | | | |
| 7 | NESTED LOOPS | | | | 2511 | +2 | 10 | 39M | | | | | | |
| 8 | NESTED LOOPS | | 1 | 2 | 2511 | +2 | 10 | 48M | | | | | | |
| 9 | PX BLOCK ITERATOR | | 1 | 2 | 2511 | +2 | 10 | 39M | | | | | | |
| 10 | TABLE ACCESS STORAGE FULL | TABLE2 | 1 | 2 | 2511 | +2 | 147 | 39M | 6764 | 6GB | 62.69% | 106M | | |
| 11 | INDEX RANGE SCAN | TABLE3_IX3 | 1 | 3 | 2511 | +2 | 39M | 48M | 203 | 2MB | | | | |
| 12 | TABLE ACCESS BY INDEX ROWID | TABLE3 | 1 | 3 | 2511 | +2 | 48M | 39M | 1055 | 8MB | | | | |
=====================================================================================================================================================================================================
Predicate Information (identified by operation id):
---------------------------------------------------
10 - storage("TABLE2"."head_id"=xxxxxxxx AND "table2_c1"='F' AND "TABLE2"."table2_hesdstat"='XXX')
filter("TABLE2"."head_id"=xxxxxxxx AND "table2_c1"='F' AND "TABLE2"."table2_hesdstat"='XXX')
11 - access("TABLE3"."head_id"=xxxxxxxx AND "TABLE3"."FX_Instance_ID"="TABLE2"."FX_Instance_ID")
12 - filter("TABLE3"."FXID" IS NOT NULL AND "TABLE3"."STAT"='XXX' AND "TABLE3"."FXID"="TABLE2"."FXID" AND
"TABLE3"."gr_id"="TABLE2"."gr_id")
Note
-----
- dynamic statistics used: dynamic sampling (level=7)
- Degree of Parallelism is 10 because of table property