I have a table of 500 million records, partitioned on the STATUS column into 5 partitions (row movement enabled), each holding roughly 100 million records.
I'm updating STATUS with a BULK COLLECT / FORALL loop (LIMIT 10,000), which moves the affected records to another partition, committing inside the loop.
About 140 million rows are expected to be updated.
The table has 12 columns (NUMBER or VARCHAR2) and 4 indexes.
It's currently updating at a rate of about 40,000 records/minute; at that rate, 140 million rows would take roughly 3,500 minutes, i.e. close to 58 hours.
Is this expected behavior? I believe the performance can be improved, but I haven't been able to find a better approach.
Are there any DB parameters I could check with our DBAs to make this more efficient?
Working on Oracle 19c (production) on Windows 10.
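
For context, the table is laid out roughly as follows. This is a simplified sketch only; the partition names, status values, and column list are illustrative, not our actual DDL:

-- Simplified sketch of the assumed layout: list-partitioned on the
-- status column, with row movement enabled so an UPDATE can relocate
-- rows across partitions. All names and values here are illustrative.
CREATE TABLE emp (
  emp_num    NUMBER PRIMARY KEY,
  status_num NUMBER NOT NULL,   -- the STATUS partition key
  userid     NUMBER,
  lastmod    DATE
  -- ... 8 further NUMBER/VARCHAR2 columns
)
PARTITION BY LIST (status_num) (
  PARTITION p_status_10 VALUES (10),
  PARTITION p_status_20 VALUES (20),
  PARTITION p_status_30 VALUES (30),
  PARTITION p_status_46 VALUES (46),
  PARTITION p_status_99 VALUES (99)
)
ENABLE ROW MOVEMENT;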
DECLARE
  -- Collection of keys fetched from the staging table.
  TYPE emp_array_type IS TABLE OF emp.emp_num%TYPE INDEX BY BINARY_INTEGER;
  emp_array_object emp_array_type;
  fetch_size       CONSTANT PLS_INTEGER := 10000;

  -- ORA-24381 is raised when FORALL ... SAVE EXCEPTIONS hits errors.
  bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errors, -24381);

  CURSOR emp_cursor IS
    SELECT emp_num FROM emp_num_temp;
BEGIN
  OPEN emp_cursor;
  LOOP
    FETCH emp_cursor BULK COLLECT INTO emp_array_object LIMIT fetch_size;
    EXIT WHEN emp_array_object.COUNT = 0;

    BEGIN
      -- Changing the status moves each row to its new partition
      -- (row movement is enabled on the table).
      FORALL i IN 1 .. emp_array_object.COUNT SAVE EXCEPTIONS
        UPDATE emp
           SET status_num = 46,
               userid     = 1,
               lastmod    = SYSDATE
         WHERE emp_num = emp_array_object(i);
    EXCEPTION
      WHEN bulk_errors THEN
        -- Report the failed rows and continue with the next batch.
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
          DBMS_OUTPUT.put_line('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                               ': ' || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
        END LOOP;
    END;

    COMMIT;  -- commit per batch, as in the original design
  END LOOP;
  CLOSE emp_cursor;
END;
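
Would a single set-based statement be a better fit than the batched PL/SQL loop? Something along these lines; the session-level parallel DML setting and the degree of 8 are assumptions about what our environment permits, not tested settings:

-- Sketch of a set-based alternative: one parallel DML statement
-- instead of the batched FORALL loop. Degree 8 is an arbitrary
-- example value.
ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(e, 8) */ emp e
   SET e.status_num = 46,
       e.userid     = 1,
       e.lastmod    = SYSDATE
 WHERE e.emp_num IN (SELECT t.emp_num FROM emp_num_temp t);

COMMIT;

I assume a single 140-million-row transaction would need correspondingly large undo, which is part of what I'd want to confirm with the DBAs.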