Hi, we are running Oracle Database 11.2.0.4 on an Exadata X5. Our infrastructure team recently enabled write-back flash cache on it. Per the documentation, write-back flash cache significantly improves write-intensive operations because writing to flash cache is faster than writing to hard disks; write-back is the default starting with the X6 Exadata models and is recommended by Oracle for X4 and above.
But immediately after this change, a few small queries that used to finish in a fraction of a second (~0.03 seconds/execution) slowed to ~6 seconds/execution for the next 7-8 hours, then returned to normal. Because these queries execute a very high number of times, the increase in time per execution inflated the overall job runtime. Digging into them, I found no change in execution plan and no change in data volume; the extra time was almost entirely "cell single block physical read" waits. Checking the overall user I/O trend for the database in DBA_HIST_SYSTEM_EVENT confirmed that "cell single block physical read" increased significantly right after the change was implemented and returned to business-as-usual after roughly 7-8 hours. I have not been able to relate this symptom to enabling write-back flash cache, and would like to hear from the experts how the two can be logically connected.
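For reference, this is roughly the kind of trend check I ran against DBA_HIST_SYSTEM_EVENT (a minimal sketch, not my exact query; the LAG-based delta calculation is my framing of it, and on RAC you would additionally filter or group by instance):

```sql
-- Average latency (ms) of "cell single block physical read" per AWR snapshot.
-- TOTAL_WAITS and TIME_WAITED_MICRO are cumulative, so deltas between
-- consecutive snapshots are computed with LAG.
SELECT sn.end_interval_time,
       ROUND ( (e.time_waited_micro - LAG (e.time_waited_micro) OVER (ORDER BY e.snap_id))
             / NULLIF (e.total_waits - LAG (e.total_waits) OVER (ORDER BY e.snap_id), 0)
             / 1000, 3) avg_wait_ms
FROM   dba_hist_system_event e, dba_hist_snapshot sn
WHERE  e.snap_id = sn.snap_id
AND    e.dbid = sn.dbid
AND    e.instance_number = sn.instance_number
AND    e.event_name = 'cell single block physical read'
ORDER  BY e.snap_id;
```

The average wait per read is what jumped after the change window and drifted back down over the following hours.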
I have a few questions:
1) Is it possible that, during the activity to enable write-back flash cache, the flash cache was flushed completely, so the initial increase in "cell single block physical read" was simply cache warm-up, and after ~8 hours things returned to BAU? Is it realistic for cache warm-up to take ~8 hours? Are there statistics (perhaps Exadata flash cache related) I can check to confirm this theory?
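In case it helps anyone answering: the check I had in mind was to trend the flash cache hit statistics from DBA_HIST_SYSSTAT (a sketch under the assumption that the standard Exadata statistic names below are populated on this version; the values are cumulative, so what matters is how the ratio of the snapshot-to-snapshot deltas climbs back after the change):

```sql
-- Cumulative flash cache read hits vs. total read requests per snapshot.
-- If the cache was flushed, the hit share of the deltas should start low
-- right after the change and climb back as the cache re-warms.
SELECT sn.end_interval_time,
       SUM (CASE s.stat_name WHEN 'cell flash cache read hits'
                             THEN s.value END) fc_read_hits,
       SUM (CASE s.stat_name WHEN 'physical read total IO requests'
                             THEN s.value END) total_read_reqs
FROM   dba_hist_sysstat s, dba_hist_snapshot sn
WHERE  s.snap_id = sn.snap_id
AND    s.dbid = sn.dbid
AND    s.instance_number = sn.instance_number
AND    s.stat_name IN ('cell flash cache read hits',
                       'physical read total IO requests')
GROUP  BY sn.end_interval_time
ORDER  BY sn.end_interval_time;
```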
2) Is there anywhere in OEM where I can see flash cache metrics that might provide a matching clue?
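On the storage cell side, I assume the flash cache metrics can also be pulled directly with CellCLI (a sketch of the commands I am thinking of, run as celladmin or root on a cell; the dcli `cell_group` file is an assumption about how the cells are grouped in our setup):

```shell
# Current flash cache metrics (utilisation, request rates) on one cell
cellcli -e "LIST METRICCURRENT WHERE objectType = 'FLASHCACHE'"

# History of a single metric, e.g. bytes used in the flash cache
cellcli -e "LIST METRICHISTORY WHERE name = 'FC_BY_USED'"

# Same across all cells from a DB node, assuming dcli is configured
# with a cell_group file listing the storage cells
dcli -g cell_group -l celladmin "cellcli -e LIST METRICCURRENT WHERE objectType = \'FLASHCACHE\'"
```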
Below is a sample query with its execution plan and details from DBA_HIST_SQLSTAT.
INSERT INTO tab1(c1, c2, c3.....c39) VALUES ( :1, :2, :3, :4, :5, :6, :7,
:8, :9, :10, :11, :12, :13, :14, :15, :16, :17, :18, :19, :20, :21,
:22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32, :33, :34, :35,
:36, :37, :38, :39)
-------------------------------------------------
| Id | Operation | Name | Cost |
-------------------------------------------------
| 0 | INSERT STATEMENT | | 1 |
| 1 | LOAD TABLE CONVENTIONAL | | |
-------------------------------------------------
Note
-----
- cpu costing is off (consider enabling it)
Below is the query used to fetch the per-SQL statistics. After 4 PM on 28th Oct the SQL ran slowly until about 3 AM on 29th Oct. (Note on units: CPU_TIME_EXEC and IO_PER_EXEC are in microseconds, and the MSEC_EXEC column is actually seconds per execution, since elapsed time is divided by 1,000,000.)
-- The SELECT line, join predicates and GROUP BY were trimmed from my original
-- paste; reconstructed here in the obvious way, with the sql_id left as a bind.
SELECT TO_CHAR (s.end_interval_time, 'DD-MON-YYYY HH24') sample_end,
       SUM (q.executions_delta) executions,
       ROUND (SUM (q.disk_reads_delta) / GREATEST (SUM (q.executions_delta), 1), 1) pio_per_exec,
       ROUND (SUM (q.buffer_gets_delta) / GREATEST (SUM (q.executions_delta), 1), 1) lio_per_exec,
       ROUND (SUM (q.rows_processed_delta) / GREATEST (SUM (q.executions_delta), 1), 1) rows_per_exec,
       ROUND (SUM (q.cpu_time_delta) / GREATEST (SUM (q.executions_delta), 1), 1) cpu_time_exec,
       ROUND (SUM (q.ccwait_delta) / GREATEST (SUM (q.executions_delta), 1), 1) concurrency_per_exec,
       ROUND (SUM (q.iowait_delta) / GREATEST (SUM (q.executions_delta), 1), 1) io_per_exec,
       ROUND (SUM (q.apwait_delta) / GREATEST (SUM (q.executions_delta), 1), 1) apwait_per_exec,
       -- divides by 1e6, so despite the alias this is seconds per execution
       ROUND (SUM (q.elapsed_time_delta) / GREATEST (SUM (q.executions_delta), 1) / 1000000, 4) msec_exec
FROM   dba_hist_sqlstat q, dba_hist_snapshot s
WHERE  q.snap_id = s.snap_id
AND    q.dbid = s.dbid
AND    q.instance_number = s.instance_number
AND    q.sql_id = :sql_id
GROUP  BY TO_CHAR (s.end_interval_time, 'DD-MON-YYYY HH24')
ORDER  BY MIN (s.end_interval_time);
| SAMPLE_END | EXECUTIONS | PIO_PER_EXEC | LIO_PER_EXEC | ROWS_PER_EXEC | CPU_TIME_EXEC | CONCURRENCY_PER_EXEC | IO_PER_EXEC | APWAIT_PER_EXEC | MSEC_EXEC |
| 24-OCT-2018 11 | 67233 | 77.9 | 638.7 | 104 | 8865.5 | 0 | 33630.8 | 0 | 0.0428 |
| 24-OCT-2018 12 | 8527 | 72.7 | 624 | 104 | 8378.8 | 0 | 28958.2 | 0 | 0.0359 |
| 25-OCT-2018 11 | 69528 | 77.4 | 641.5 | 104 | 8728.2 | 0 | 31705.5 | 0 | 0.0416 |
| 25-OCT-2018 12 | 6188 | 64.2 | 625 | 104 | 7996.4 | 0 | 20054.2 | 0 | 0.0303 |
| 26-OCT-2018 11 | 76742 | 75.6 | 640.8 | 104 | 8581.8 | 0 | 28931.7 | 0 | 0.0373 |
| 26-OCT-2018 12 | 4135 | 61 | 622.7 | 104 | 7515.7 | 0 | 17421.7 | 0 | 0.0238 |
| 27-OCT-2018 11 | 68075 | 78 | 641.8 | 104 | 8726.8 | 0 | 31291.5 | 0 | 0.0416 |
| 27-OCT-2018 12 | 26287 | 67 | 630.3 | 104 | 7710 | 0 | 22730.3 | 0 | 0.0293 |
| 28-OCT-2018 16 | 372 | 144.5 | 574.2 | 104 | 38980.5 | 0 | 6343659.8 | 1.1 | 6.4138 |
| 28-OCT-2018 17 | 1436 | 106.3 | 579.6 | 104 | 16414 | 0 | 2477367.5 | 0 | 2.4957 |
| 28-OCT-2018 18 | 1969 | 99.6 | 582.4 | 104 | 13972.9 | 0 | 1810094.5 | 0 | 1.8264 |
| 28-OCT-2018 19 | 2018 | 102.8 | 580.3 | 104 | 14403.7 | 0 | 1759101.9 | 0 | 1.7779 |
| 28-OCT-2018 20 | 3047 | 108.2 | 580.7 | 104 | 13779.9 | 0 | 1156912 | 0 | 1.1774 |
| 28-OCT-2018 21 | 6822 | 98.2 | 580 | 104 | 11893.5 | 0 | 510079.5 | 0 | 0.5263 |
| 28-OCT-2018 22 | 15779 | 90.1 | 581.1 | 104 | 9850.4 | 0 | 218710.3 | 0 | 0.2271 |
| 28-OCT-2018 23 | 26541 | 76.8 | 578.8 | 104 | 8463.7 | 3.8 | 127027.9 | 0 | 0.1345 |
| 29-OCT-2018 00 | 3962 | 100.5 | 580.1 | 104 | 12248.8 | 0 | 889709.6 | 0 | 0.8995 |
| 29-OCT-2018 01 | 7681 | 91.8 | 580.4 | 104 | 10477.3 | 0 | 459808.3 | 0 | 0.4684 |
| 29-OCT-2018 02 | 23037 | 78.3 | 580.8 | 104 | 8350.8 | 0 | 117437.2 | 0 | 0.1247 |
| 29-OCT-2018 03 | 92638 | 70 | 598.9 | 104 | 7616.2 | 0 | 18209.4 | 0 | 0.0275 |
| 29-OCT-2018 11 | 73476 | 79.6 | 564.7 | 104 | 7859.2 | 0 | 29528.6 | 0 | 0.0375 |
| 30-OCT-2018 11 | 74093 | 77.1 | 579.2 | 104 | 7926.1 | 0 | 22989.5 | 0 | 0.032 |