Hello all,
I've reviewed some OTN threads on this, but I couldn't find one that specifically addresses why inserts would get slower and slower over time.
I have read the "my query runs slow" link, but my problem is with inserting data, and I've already tuned the select statements as much as I can at this point.
I am running a data conversion (stored procedure) that inserts 30-40 million rows, and it has been running for over a week. At the beginning and end of every day I do a count of the records in the target table. The first million rows go in at a respectable speed, but the inserts get slower and slower over time. For example, at 18 million rows it was inserting 345K rows per hour; at 25 million rows, 228K per hour; and at 26 million rows, 127K per hour.
I'm not sure what to do about the insert statements. The procedure inserts with bind variables, and there is one unique index on the table; it is composed of two foreign keys that point to two other tables.
INSERT INTO stu_schedule_detail
  (pk_id, fk_stu_schedule, fk_stu_attendance,
   fk_semester, special_days, class_voc_out_iep)
VALUES
  (l_pk_stu_schedule_detail,
   l_pk_stu_schedule,
   z.fk_stu_attendance,
   decode(l_meets_semester, y.semester, '', l_pk_id_semester),
   decode(l_meets_days, y.days, '', y.days),
   x.voc_outside_iep);
CREATE UNIQUE INDEX STU_SCHEDULE_DETAIL_I ON STU_SCHEDULE_DETAIL(FK_STU_SCHEDULE,FK_STU_ATTENDANCE)
TABLESPACE sis_express_index
STORAGE (INITIAL 4096
NEXT 4096
PCTINCREASE 0);
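In case it helps to see the slowdown at a finer grain than my twice-daily count: to narrow down where the rate drops, I could sample the count every few minutes from a second session with something like the sketch below. The load_rate_log table name and the 10-minute interval are just made up for illustration, and the loop needs EXECUTE on DBMS_LOCK.

-- one-time setup (table name is hypothetical)
CREATE TABLE load_rate_log (sampled_at DATE, row_count NUMBER);

-- run from a second session while the conversion is going
BEGIN
  FOR i IN 1 .. 144 LOOP          -- one day of 10-minute samples
    INSERT INTO load_rate_log
      SELECT SYSDATE, COUNT(*) FROM stu_schedule_detail;
    COMMIT;
    DBMS_LOCK.SLEEP(600);         -- wait 10 minutes between samples
  END LOOP;
END;
/

-- then the per-interval insert rate falls out of the deltas
SELECT sampled_at, row_count,
       row_count - LAG(row_count) OVER (ORDER BY sampled_at) AS rows_in_interval
FROM load_rate_log
ORDER BY sampled_at;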
Additionally, we had a power-related crash, and our DBA was able to recover the records that were in the process of being inserted. She is importing the dmp of the recovered table (27 million rows); the import has been running since 10 a.m. on August 4. The unique index is enabled during the import.
My questions are:
-- why do the inserts slow down over time? Is the performance I am seeing reasonable, and is the import taking a reasonable amount of time?
-- we have to preserve the unique index because the data conversion can produce duplicate records (the input and output data have very different architectures, so there is not a one-for-one correspondence). Is there a way to preserve the unique index but bypass the default Oracle processing, or otherwise manage the overhead of writing the records better?
(I was thinking about writing the data to a flat file and then using a sqlldr direct load, but I would still need the unique index in place, and I'm not sure whether that would buy me anything. A rough sketch of what I had in mind follows.)
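Roughly what I was picturing is the control file below, with DIRECT=TRUE and the same column order as the insert above. The data file name, delimiter, and invocation details are made up for illustration; they are not from our actual environment.

-- stu_schedule_detail.ctl (file name and delimiter are hypothetical)
-- invoked with something like:
--   sqlldr control=stu_schedule_detail.ctl log=stu_schedule_detail.log
OPTIONS (DIRECT=TRUE)
LOAD DATA
INFILE 'stu_schedule_detail.dat'
APPEND
INTO TABLE stu_schedule_detail
FIELDS TERMINATED BY '|'
(pk_id,
 fk_stu_schedule,
 fk_stu_attendance,
 fk_semester,
 special_days,
 class_voc_out_iep)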
Microsoft Windows [Version 6.0.6002]
SQL*Plus: Release 11.2.0.2.0 Production on Mon Aug 8 11:12:16 2011
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
thanks in advance,
Mariann Davis