I have a function that copies any new rows from Database_A, Table_A to Database_B, Table_A via a database link. This happens periodically, and around 100k new records can accumulate between runs.
I find the records that need to be copied by filtering on a "copied" flag column on the table. Within the loop, I update each row's flag so it won't be copied again on the next pass. Here's the function:
FUNCTION copy_records RETURN INTEGER IS
  pCount INTEGER := 0;

  CURSOR recs IS
    SELECT *
      FROM TABLE_A
     WHERE FLG_COPIED = 'N'
        OR FLG_COPIED IS NULL
     ORDER BY MYKEY;  -- do the oldest first (only important if we commit during iterations)
BEGIN
  FOR rec IN recs LOOP
    -- first copy to the backup db
    INSERT INTO TABLE_A@BACKUP_DB
      (FIELD_1, FIELD_2, FIELD_3)
    VALUES
      (rec.FIELD_1, rec.FIELD_2, rec.FIELD_3);

    -- now flag the row as copied
    UPDATE TABLE_A
       SET FLG_COPIED = 'Y'
     WHERE MYKEY = rec.MYKEY;

    -- counter sent back for logging
    pCount := pCount + 1;
  END LOOP;

  RETURN pCount;
EXCEPTION
  WHEN OTHERS THEN
    RETURN SQLCODE;
END;
My problem is that this blocks other sessions on the table while the process runs. I would expect some row-level blocking, which is fine (this table is primarily INSERT-only). But I'm not sure why it blocks in a way that won't even allow me to INSERT into the table.
Can anyone explain this to me?
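For reference, while the copy is running and my other session's INSERT hangs, I can see the wait with a query like this (assuming I have SELECT privilege on V$SESSION; the column names are standard in recent Oracle versions):

-- Diagnostic sketch: show which sessions are blocked, by whom, and on what wait event.
SELECT sid,
       serial#,
       blocking_session,
       event,
       seconds_in_wait
  FROM v$session
 WHERE blocking_session IS NOT NULL;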
If I do a COMMIT during each iteration, I can at least perform an INSERT from another session. But this slows things down greatly when used across a DB link, so I'd like to avoid it (plus I want this function to be an all-or-none transaction). I'm also not sure whether the effect of the COMMIT is to reduce the blocking to row-level only, or just to shrink the window during which the table is locked.
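For what it's worth, if I'm forced to commit inside the loop, I assume batching the commits (every N rows rather than every row) would recover most of the lost speed, at the cost of the all-or-none property. A rough sketch of the loop body (the 1000-row batch size is arbitrary, not something I've tuned):

-- Sketch: commit every 1000 rows instead of after each row.
FOR rec IN recs LOOP
  INSERT INTO TABLE_A@BACKUP_DB (FIELD_1, FIELD_2, FIELD_3)
  VALUES (rec.FIELD_1, rec.FIELD_2, rec.FIELD_3);

  UPDATE TABLE_A
     SET FLG_COPIED = 'Y'
   WHERE MYKEY = rec.MYKEY;

  pCount := pCount + 1;

  IF MOD(pCount, 1000) = 0 THEN
    COMMIT;  -- releases locks held so far, but gives up all-or-none semantics
  END IF;
END LOOP;
COMMIT;  -- commit the final partial batch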
Edited by: xaeryan on Oct 14, 2011 3:51 PM