Minimal Supplemental Logging IMPROVES Performance
I have a rather unusual question.
I have been examining the use of Asynchronous Autolog Change Data Capture. Our main database system is under load, and we want to install a secondary system to take on the reporting duties. We need to extract the daily changes with absolute minimal impact on the source system. For this reason, AACDC seems the best approach.
I have been attempting to measure the performance difference for bulk updates in three cases.
a) No Supplemental Logging on Source.
b) Minimal Supplemental Logging on Source.
c) Logging ALL columns ALWAYS on our source table.
The shocking result is this: enabling Minimal Supplemental Logging actually makes the updates FASTER and the redo/archive logs SMALLER.
I inserted 500k rows into a table. A run consisted of 10 batches, each updating 30k rows, with a commit and then a 5-second pause between batches.
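For reference, the driver for a run is just a simple PL/SQL loop; something along these lines gives the idea (the table and column names here are placeholders, not my real schema):

BEGIN
  FOR i IN 1 .. 10 LOOP
    UPDATE my_table
       SET my_col2 = my_col2 + 1   -- placeholder column updated in each row
     WHERE batch_no = i;           -- placeholder column splitting the rows into batches
    COMMIT;
    DBMS_LOCK.SLEEP(5);            -- the 5 second pause between batches
  END LOOP;
END;
/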
To disable supplemental logging I used:
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
To enable it with minimal logging, I used:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
To enable it always I used:
ALTER TABLE my_table
ADD SUPPLEMENTAL LOG GROUP my_log_group
(my_col1, my_col2, ...) ALWAYS;
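(For anyone reproducing this, the logging state can be confirmed between runs with queries along these lines; these are the standard dictionary views, but worth checking on your release:)

SELECT supplemental_log_data_min, supplemental_log_data_all
  FROM v$database;

SELECT log_group_name, always
  FROM dba_log_groups
 WHERE table_name = 'MY_TABLE';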
I ran each test several times, repeating until the figures stabilized (which they did very quickly) and I had three consistent runs in a row. I measured the total size of the archive logs generated by each test (using ALTER SYSTEM SWITCH LOGFILE; to force a log switch). I had plenty of decent-sized redo logs, and the alert log showed no problems there.
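(If it helps anyone reproduce the measurement, the archive volume for a run can be totted up with a query along these lines, given the run's start time:)

SELECT ROUND(SUM(blocks * block_size) / 1024 / 1024) AS archived_mb
  FROM v$archived_log
 WHERE first_time >= :run_start;   -- start time of the test run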
For each scenario, I obtained three consistent runs (each of 10 batch updates). Here are the numbers.
NO SUPPLEMENTAL LOGGING
Average Batch Update Time (update + commit 50k) = 8.03 seconds
Total Size of Archive Logs Generated = 118 MB
MINIMAL SUPPLEMENTAL LOGGING
Average Batch Update Time (update + commit 50k) = 7.17 seconds
Total Size of Archive Logs Generated = 87 MB
ALL SUPPLEMENTAL LOGGING
Average Batch Update Time (update + commit 50k) = 18.93 seconds
Total Size of Archive Logs Generated = 183 MB
It's not at all surprising that ALL logging generates much larger log files and is much slower, but it is very surprising indeed that MINIMAL logging is both faster and smaller than no supplemental logging at all.
Can anybody shed some light on this?
I should mention that this table is unusual in that it has NO primary key. (If it did have a primary key, I would have considered materialized views; but it doesn't, and tests show that adding a sequence-driven primary key and a materialized view log drops insert performance by 15%.)
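(For completeness, the alternative I rejected was roughly this shape; the names are illustrative only:)

ALTER TABLE my_table ADD (id NUMBER);
CREATE SEQUENCE my_table_seq;
-- populate id from the sequence, then enforce the key:
ALTER TABLE my_table ADD CONSTRAINT my_table_pk PRIMARY KEY (id);
CREATE MATERIALIZED VIEW LOG ON my_table WITH PRIMARY KEY;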