This is Oracle 10g Release 2 on Windows XP.
We have a PL/SQL batch job which is supposed to mimic certain processes on our live system.
It includes two triggers which fire whenever a particular status gets updated. These call a procedure which inserts data into a table on the local instance, but the same procedure also inserts data into a different instance on the same machine via a database link. (It has to work like this, unfortunately.)
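For what it's worth, the setup looks roughly like this (object names are made up for illustration, and the real procedure does more than this):

CREATE OR REPLACE TRIGGER trg_order_status_upd
  AFTER UPDATE OF status ON orders
  FOR EACH ROW
BEGIN
  -- fires on every status change and hands off to the shared procedure
  log_status_change(:NEW.order_id, :NEW.status);
END;
/

CREATE OR REPLACE PROCEDURE log_status_change (
  p_order_id IN NUMBER,
  p_status   IN VARCHAR2
) AS
BEGIN
  -- insert into the table on this instance
  INSERT INTO status_history (order_id, status, changed_on)
  VALUES (p_order_id, p_status, SYSDATE);

  -- insert into the second instance on the same machine via a database link
  INSERT INTO status_history@other_inst (order_id, status, changed_on)
  VALUES (p_order_id, p_status, SYSDATE);
END;
/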
This is currently running on a Windows XP machine (but will eventually run on a RAC cluster on an AIX box).
I had 50,000 records to import, which took just under 90 minutes using row-by-row processing with the triggers disabled (so everything was happening on the local machine).
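By row-by-row processing I mean a plain cursor loop, roughly like this (table and column names are again just illustrative):

BEGIN
  FOR rec IN (SELECT order_id, status FROM import_staging) LOOP
    -- one insert per source row; the status triggers fire here when enabled
    INSERT INTO orders (order_id, status, created_on)
    VALUES (rec.order_id, rec.status, SYSDATE);
  END LOOP;
  COMMIT;
END;
/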
When I added the triggers and the remote procedure call, the job is still going, but the system is not showing any CPU usage and the disk is thrashing like crazy.
On our system test machine we are experiencing similar behaviour.
We can import a certain number of records, but once we get over a certain number, the CPU usage drops.
When we stop and restart the instances our throughput increases dramatically, and Oracle shows it's using plenty of CPU.
The other thing to note is that even our direct inserts are relatively slow.
Any idea of the likely cause and cure for this?
I have run a Statspack report and here are the top 5 wait events:
Top 5 Timed Events                                                      Avg %Total
~~~~~~~~~~~~~~~~~~                                                     wait   Call
Event                                            Waits    Time (s)     (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
CPU time                                                        366          84.9
db file sequential read                          2,072           19      9    4.5
log file parallel write                          1,933           14      7    3.2
db file scattered read                           1,888           10      5    2.4
control file sequential read                     1,615            5      3    1.2
          -------------------------------------------------------------
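In case it matters, the snapshots were taken either side of the run with the standard Statspack calls, along these lines:

EXEC STATSPACK.SNAP;           -- snapshot before the batch run
-- ... run the import job ...
EXEC STATSPACK.SNAP;           -- snapshot after the batch run
@?/rdbms/admin/spreport.sql    -- report between the two snapshot ids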