Hi,
we have a Streams-based cluster of two nodes, one active and the other standby. We're performing some data manipulation to update certain non-key fields, but we use a unique-key (UK) indexed column for the update.
We noticed some performance degradation when executing on the active side.
The explain plan for the single operation:
explain plan for
update S04_SIM_RTDB set SOGID=296, SCP_NAME='OCSQDC', V2805_SPARE_STRING11=V1012_SPARE_STRING_6 where SIM_KEY='97477576222' --1010916014;
shows:
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 2770857907

---------------------------------------------------------------------------------------
| Id  | Operation          | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |                   |     1 |  1587 |     3   (0)| 00:00:01 |
|   1 |  UPDATE            | S04_SIM_RTDB      |       |       |            |          |
|*  2 |   INDEX UNIQUE SCAN| UK_S04_SIM_RTDB01 |     1 |  1587 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SIM_KEY"='97477576222')

14 rows selected.
The optimizer picks the proper index, yet the update is still slow: the table has about 2.6M rows, and updating 60 rows takes roughly one second.
On the mated (standby) DB, with exactly the same layout (we align the two basically by copying the datafiles), the same explain plan differs only in the Bytes value:
---------------------------------------------------------------------------------------
| Id  | Operation          | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |                   |     1 |    33 |     3   (0)| 00:00:01 |
|   1 |  UPDATE            | S04_SIM_RTDB      |       |       |            |          |
|*  2 |   INDEX UNIQUE SCAN| UK_S04_SIM_RTDB01 |     1 |    33 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------
With SET AUTOTRACE ON, the only notable difference in the statistics report is the redo size:

1160  redo size
vs
9216  redo size

The latter is the slow (active) node; the former is roughly 10x faster (about 600 updates/sec on the standby DB).
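For completeness, this is how I compared the redo generated per session (a minimal sketch using the standard V$MYSTAT/V$STATNAME views; run it before and after a batch of updates and take the delta):

```sql
-- Redo generated by the current session so far; the 'redo size'
-- statistic name is standard in V$STATNAME.
SELECT sn.name, ms.value
FROM   v$mystat   ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name = 'redo size';
```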
The nodes are identical, running the same OS (RHEL, kernel 3.10.0-693.11.6.el7.x86_64) and DB (12.1.0.2.0) levels.
I have already regenerated the statistics on that table on both machines, with no change in the values reported by autotrace nor in a test run of 1000 similar updates (using different keys).
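For reference, the statistics were regathered roughly like this (the owner name RTDB_OWNER here is a placeholder, not our real schema):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'RTDB_OWNER',   -- placeholder schema name
    tabname => 'S04_SIM_RTDB',
    cascade => TRUE);          -- also gathers stats on the indexes
END;
/
```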
Do you have any idea where to look?
I haven't rebuilt the UK index so far, this being a production system where doing so might have killed performance, but I may try it if you think it's worth it.
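If it is worth trying, I suppose the rebuild would be something like this (ONLINE to limit the impact on the production system):

```sql
ALTER INDEX UK_S04_SIM_RTDB01 REBUILD ONLINE;
```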
The main difference is this Bytes amount and the redo log discrepancy; I assume the same update, using the same index and performing the same action, should not cause those differences. Both nodes run tablespaces with 8K block sizes.
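One thing I was considering checking is row chaining/migration on the active side, since migrated rows might explain the extra redo per update (a sketch; it assumes the CHAINED_ROWS table has been created with $ORACLE_HOME/rdbms/admin/utlchain.sql):

```sql
ANALYZE TABLE S04_SIM_RTDB LIST CHAINED ROWS INTO CHAINED_ROWS;

SELECT COUNT(*) AS chained_or_migrated
FROM   CHAINED_ROWS
WHERE  table_name = 'S04_SIM_RTDB';
```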
Thanks in advance!
Mike