Hello,
I need to delete several million rows from an audit table. The rows to delete all have audit_type='READ' and fall in the period 17th Feb 2021 to 10th Feb 2022; 344 days in this period contain rows where audit_type='READ'. The date on which the audit event occurred is a column on the table. Each day in this period also has rows for other audit types, e.g. audit_type='UPDATE'. Auditing of read operations was switched off, so no audit_type='READ' rows have been generated since 10th Feb 2022. The total number of rows to delete is 417330322. A typical breakdown of READ audit row counts by date is:
05/03/2021 1460740
04/03/2021 1378231
03/03/2021 1375498
02/03/2021 1723104
01/03/2021 1726566
28/02/2021 391489
27/02/2021 322497
26/02/2021 1177901
25/02/2021 1249281
24/02/2021 1083606
23/02/2021 1188044
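For reference, the breakdown above comes from a query along these lines (the table name AUDIT_LOG and column name AUDIT_DATE are assumptions here, since I haven't shown the actual DDL):

```sql
-- Per-day count of READ audit rows (assumed names: AUDIT_LOG, AUDIT_DATE)
SELECT TRUNC(audit_date) AS audit_day,
       COUNT(*)          AS read_rows
FROM   audit_log
WHERE  audit_type = 'READ'
GROUP  BY TRUNC(audit_date)
ORDER  BY audit_day DESC;
```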
My original approach was to secure downtime for the application that uses this database, to guarantee no other database activity, and then to run an anonymous PL/SQL block: open a cursor over the dates that have audit_type='READ' rows, loop through it, delete the audit_type='READ' rows for each date, and commit after each delete statement in the loop. This removes the READ rows on a day-by-day basis and commits after each day's data has been deleted, to prevent any issues with UNDO.
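A minimal sketch of that anonymous block (again assuming the table is AUDIT_LOG with an AUDIT_DATE column; adjust to the real names):

```sql
-- Day-by-day delete with a commit per day, to bound UNDO usage
DECLARE
  CURSOR c_days IS
    SELECT DISTINCT TRUNC(audit_date) AS audit_day
    FROM   audit_log
    WHERE  audit_type = 'READ'
    ORDER  BY audit_day;
BEGIN
  FOR r IN c_days LOOP
    DELETE FROM audit_log
    WHERE  audit_type = 'READ'
    AND    audit_date >= r.audit_day
    AND    audit_date <  r.audit_day + 1;
    COMMIT;  -- one commit per day's worth of deletes
  END LOOP;
END;
/
```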
I have since read on this forum that a COMMIT shouldn't be issued inside a loop: https://community.oracle.com/tech/developers/discussion/2551616/delete-and-commit-data-in-a-plsql-loop
In light of this, I've used a loop to generate a DELETE statement for each required day, put these into a script, and placed a COMMIT after every 20 DELETE statements. To check whether this will cause UNDO issues, I've looked at estimating the size of the data deleted by each statement, because at the moment it is just a guess that there won't be any problems with the database in essentially single-user mode. The trouble is that dba_segments reports allocated space for the whole segment and doesn't reflect the actual data in the column values, so the best I could do is sum the dba_segments figures for the whole table and then guesstimate the UNDO required.
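To illustrate, the generated script has this shape, and a rough sizing query I've considered is shown below it (AUDIT_LOG is an assumed name, and avg_row_len is only an approximation: the UNDO for a deleted row is typically larger than the row itself, since it holds the full row image plus transaction overhead):

```sql
-- Shape of the generated script: one DELETE per day, COMMIT every 20 statements
DELETE FROM audit_log WHERE audit_type = 'READ'
  AND audit_date >= DATE '2021-02-17' AND audit_date < DATE '2021-02-18';
DELETE FROM audit_log WHERE audit_type = 'READ'
  AND audit_date >= DATE '2021-02-18' AND audit_date < DATE '2021-02-19';
-- ... 18 more daily DELETEs ...
COMMIT;

-- Very rough estimate of the row data involved, from optimizer statistics
-- (requires statistics to be reasonably current)
SELECT avg_row_len * num_rows / 1024 / 1024 / 1024 AS approx_row_data_gb
FROM   dba_tables
WHERE  table_name = 'AUDIT_LOG';
```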
Given that the post https://community.oracle.com/tech/developers/discussion/2551616/delete-and-commit-data-in-a-plsql-loop states:
- Never do in PL/SQL what can be done with plain SQL.
- NEVER issue COMMIT inside any LOOP. Do so makes elapsed time longer & increases odds that ORA-01555 will be throw.
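If I followed the first point literally, the whole job would be one plain SQL statement with a single commit at the end (same assumed table name as above), which is what makes me worry about whether UNDO can absorb 417330322 rows in one transaction:

```sql
-- Single-statement version: one transaction covering all 417330322 rows,
-- so UNDO must be sized to hold the full row images until the COMMIT
DELETE FROM audit_log
WHERE  audit_type = 'READ';
COMMIT;
```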
Is there any way to guarantee I won't have any UNDO issues with the delete?
This work will be run over the weekend with the application down and with my session the only one issuing DML.