Overhead from setting STATISTICS_LEVEL = ALL
I am trying to test query timings using TKPROF.
To better understand the time a full table scan takes under an aggregate, I am changing the setting at the session level with: ALTER SESSION SET STATISTICS_LEVEL = <VALUE>;
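For context, the tracing sequence I am using is roughly the following (the trace file identifier is just a placeholder):

```sql
-- Tag the trace file so it is easy to find (placeholder identifier)
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'fts_test';

-- Level under test: TYPICAL (the default) on the first run, ALL on the second
ALTER SESSION SET STATISTICS_LEVEL = TYPICAL;

-- Enable extended SQL trace including wait events for this session
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

SELECT COUNT(1)
FROM   t1
WHERE  n1 <= 8
AND    ind_pad <= RPAD('x', 39) || 'x'
AND    n2 < 15;

-- Stop tracing
ALTER SESSION SET EVENTS '10046 trace name context off';
```

The raw trace file is then formatted with tkprof, e.g. `tkprof <tracefile>.trc report.txt sys=no`.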
First, with the default STATISTICS_LEVEL of TYPICAL, running the query gives 1.58 seconds of CPU time, with 0.69 seconds spent on "direct path read" wait time. The full table scan itself takes 3.6 seconds, as shown below:
SELECT count(1)
FROM T1
where n1 <= 8
and ind_pad <= rpad('x',39)||'x'
and n2 < 15
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 1.58 1.58 444521 444528 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 1.58 1.58 444521 444528 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91 (MICH)
Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=444528 pr=444521 pw=0 time=0 us)
3242364 TABLE ACCESS FULL T1 (cr=444528 pr=444521 pw=0 time=3607418 us cost=121296 size=152271540 card=3239820)
Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: ALL_ROWS
1 SORT (AGGREGATE)
3242364 TABLE ACCESS MODE: ANALYZED (FULL) OF 'T1' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
Disk file operations I/O 2 0.00 0.00
db file sequential read 3 0.00 0.00
direct path read 3489 0.00 0.69
db file scattered read 1 0.00 0.00
SQL*Net message from client 2 0.00 0.00
********************************************************************************
However, if I switch the session statistics level to ALL, I get a CPU time of 5.29 seconds with everything else almost the same. Is this difference of 5.29 sec vs 1.58 sec of CPU time purely due to the overhead of STATISTICS_LEVEL = ALL? The problem is that, as I understand it, the per-row-source timings in the child cursor's execution plan are only meaningful with STATISTICS_LEVEL = ALL. I am confused as to why turning on this parameter adds so much CPU overhead.
SELECT count(1)
FROM T1
where n1 <= 8
and ind_pad <= rpad('x',39)||'x'
and n2 < 15
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 5.29 5.28 444521 444528 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 5.29 5.29 444521 444528 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91 (MICH)
Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=444528 pr=444521 pw=0 time=0 us)
3242364 TABLE ACCESS FULL T1 (cr=444528 pr=444521 pw=0 time=3391894 us cost=121296 size=152271540 card=3239820)
Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: ALL_ROWS
1 SORT (AGGREGATE)
3242364 TABLE ACCESS MODE: ANALYZED (FULL) OF 'T1' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
Disk file operations I/O 2 0.00 0.00
db file sequential read 3 0.00 0.00
direct path read 3489 0.00 0.69
db file scattered read 1 0.00 0.00
SQL*Net message from client 2 0.00 0.00
********************************************************************************
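For reference, which individual statistics each level activates can be inspected in the V$STATISTICS_LEVEL dynamic performance view; the plan execution statistics entry there is the one tied to the per-row-source timing:

```sql
-- Show each tracked statistic, the minimum STATISTICS_LEVEL that
-- activates it, and whether it is currently enabled for the session
SELECT statistics_name,
       activation_level,
       session_status
FROM   v$statistics_level
ORDER  BY statistics_name;
```

As a lower-overhead alternative for a single statement, the /*+ GATHER_PLAN_STATISTICS */ hint enables sampled row-source statistics without raising the session-wide level to ALL.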