DB: 11.2.0.3
OS: Linux, kernel 2.6.18
I have the following three-column table:
Col1 not null number
Col2 not null number
Col3 not null number
Col1 is the primary key. There is a unique index on Col2/Col3 combined. There is a non-unique index on Col3.
Table has around 8 billion records and it is not partitioned.
Once a month, I have to delete approximately 200,000,000 records. I know this is a generic question, but I would appreciate any suggestions on the best way to accomplish this. I have been deleting in chunks of about 5,000 records at a time, but that is taking longer than I would like. Am I missing some obvious way to solve this problem?
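For reference, the chunked delete I am running looks roughly like the sketch below. The table name and the purge predicate are placeholders standing in for the real ones; the chunk size of 5,000 is the one mentioned above.

```sql
DECLARE
  l_purge_value NUMBER := 42;   -- placeholder for the real purge criterion
BEGIN
  LOOP
    DELETE FROM my_table        -- placeholder table name
     WHERE col3 = l_purge_value -- placeholder predicate identifying rows to delete
       AND ROWNUM <= 5000;      -- chunk size: about 5,000 rows per pass
    EXIT WHEN SQL%ROWCOUNT = 0; -- stop once no matching rows remain
    COMMIT;                     -- commit per chunk to keep undo small
  END LOOP;
  COMMIT;
END;
/
```

Each pass deletes up to 5,000 matching rows and commits, looping until nothing matches.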
For background, when this table/application was designed about eight years ago, we were never going to delete data from it. Now there are some new "policies" concerning some of the data that resides here, causing this new delete requirement.