Hi Guys,
I have a problem with DBMS_PARALLEL_EXECUTE. I'm trying to update a table with 13 million+ records using it. I specified a chunk size of 100,000, yet close to 33,000 chunks were created (for 13 million rows I had expected roughly 130). With parallel_level set to 60, the run took around 1.5 hours.
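For illustration, here is a minimal sketch of the setup in question - I'm assuming ROWID-based chunking here, and the task name, table name, and UPDATE statement are simplified placeholders:

BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'UPD_TASK');

  -- Ask for chunks of roughly 100,000 rows each
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'UPD_TASK',
    table_owner => USER,
    table_name  => 'BIG_TABLE',
    by_row      => TRUE,
    chunk_size  => 100000);

  -- Placeholder statement; the real UPDATE sets the actual columns
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'UPD_TASK',
    sql_stmt       => q'[UPDATE big_table
                            SET col1 = col1
                          WHERE rowid BETWEEN :start_id AND :end_id]',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 60);
END;
/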
While analyzing this issue, we looked at the same process at another site. That table holds around 8 million+ records, yet only 149 chunks were created there, and the run took a little over 3 minutes. The main difference I can see is the number of chunks.
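For what it's worth, the chunk counts above were taken with a query along these lines (task name is a placeholder):

SELECT status, COUNT(*) AS chunk_count
  FROM user_parallel_execute_chunks
 WHERE task_name = 'UPD_TASK'
 GROUP BY status;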
Could you please help me understand whether the number of chunks matters? If it does, why would the same table produce such different chunk counts on two databases? And if this is the cause, is there a way to fix it, perhaps by reducing the number of chunks created?
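One idea we have not yet verified: chunking on a numeric key column instead of by ROWID, which (if I read the docs right) should make the chunk count predictable. Something like:

-- chunk_size here is a range of key values, not a row count, so with a
-- dense sequential ID this would give roughly 130 chunks for 13M rows.
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL(
    task_name    => 'UPD_TASK',
    table_owner  => USER,
    table_name   => 'BIG_TABLE',
    table_column => 'ID',
    chunk_size   => 100000);
END;
/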
I also tried the DIY method and manually chunked the table into 150 pieces - that run took around 6 minutes (a simplified sketch of the idea follows). We would still prefer to use DBMS_PARALLEL_EXECUTE itself if the issue can be resolved - please help.
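This is not our exact code, but the DIY test followed this pattern (table and column names are placeholders, and it assumes a dense numeric key ID):

-- Bucket the keys into 150 ranges; each worker then claims one range
-- and runs the UPDATE against it.
CREATE TABLE diy_chunks AS
SELECT bucket,
       MIN(id) AS start_id,
       MAX(id) AS end_id
  FROM (SELECT id, NTILE(150) OVER (ORDER BY id) AS bucket
          FROM big_table)
 GROUP BY bucket;

-- Per worker:
-- UPDATE big_table
--    SET col1 = col1
--  WHERE id BETWEEN :start_id AND :end_id;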
Database version: 11.2.0.3
No. of virtual CPUs: 32
No significant wait events were observed in the AWR report for the duration of the run. (If any more details are required, please do let me know.)
Regards,
Sree