Oracle 11.2.0.1.0 Std Ed ONE, on OL 5.6 64bit
We have been working to tighten up a daily procedure whose average run time recently went from about 15 minutes to nearly 2 hours. I have never been able to explain the change itself (the good times predate my arrival here, and there was no baseline measurement). I started tracing the job, and the tkprof reports suggested there were good opportunities to improve it, even though that in itself doesn't explain the change in run time.
The developer made two changes. First, she modified a big update statement so that it wasn't trying to update rows with values that were already there. That made a huge difference that was easily seen in the traces.
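For context, the pattern there is the usual one of adding a predicate so the UPDATE skips rows that already hold the target value. A minimal sketch with hypothetical table and column names, not her actual statement:

-- Hypothetical names; the point is the final predicate, which avoids
-- generating undo/redo for rows that are already correct.
UPDATE store_order_request_summary s
   SET s.suggested_qty = :new_qty
 WHERE s.item_id = :item_id
   AND (s.suggested_qty <> :new_qty OR s.suggested_qty IS NULL);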
For the other change, she modified a cursor FOR loop to use a bulk collect, but she put a limit on the fetch, which I'll point out in the skeleton code below. Here's where I have a couple of questions.
First, I've not found anything in the tkprof report that lets me put a firm number on the improvement due to the introduction of the bulk collect. The series of statements being processed inside the loop is, of course, doing about the same work as always, varying a bit from day to day. But I can't find anything that would measure the reduction in work from changing 10,000 individual FETCHes into 10 BULK fetches of 1,000 rows each.
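Outside of the trace itself (where I suppose the FETCH call count is the visible difference), the only way I could think of to quantify it is a side-by-side timing test. Here is a sketch of one, using DBMS_UTILITY.GET_TIME against a hypothetical table BIG_TABLE; obviously not the real job, just the two fetch styles in isolation:

-- Sketch only: compares row-by-row fetching with a single bulk fetch
-- over a hypothetical table BIG_TABLE. GET_TIME is in centiseconds.
DECLARE
   CURSOR c IS SELECT id FROM big_table;

   TYPE t_ids IS TABLE OF big_table.id%TYPE INDEX BY BINARY_INTEGER;

   v_ids   t_ids;
   v_id    big_table.id%TYPE;
   v_t0    PLS_INTEGER;
BEGIN
   v_t0 := DBMS_UTILITY.GET_TIME;
   OPEN c;
   LOOP
      FETCH c INTO v_id;                     -- one fetch call per row
      EXIT WHEN c%NOTFOUND;
   END LOOP;
   CLOSE c;
   DBMS_OUTPUT.PUT_LINE ('row-by-row:   ' || (DBMS_UTILITY.GET_TIME - v_t0));

   v_t0 := DBMS_UTILITY.GET_TIME;
   OPEN c;
   FETCH c BULK COLLECT INTO v_ids;          -- one fetch call for all rows
   CLOSE c;
   DBMS_OUTPUT.PUT_LINE ('bulk collect: ' || (DBMS_UTILITY.GET_TIME - v_t0));
END;
/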
The second question is: what would justify limiting the bulk fetch to 1,000 rows in the first place? The developer said that was just a rule of thumb she had always been taught. My gut feeling is that it would be better to do a single bulk fetch of however many rows there are, but I'm not versed enough in this to lay out the case. Any thoughts, pro or con?
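My current understanding, which I'd appreciate having confirmed or corrected, is that the usual argument for a limit is PGA memory: an unlimited bulk collect materializes the entire result set in session memory at once. Here is a sketch of how I could watch that on our system, assuming the session can read V$MYSTAT and V$STATNAME, again against a hypothetical BIG_TABLE:

-- Sketch only: session PGA before and after an unlimited bulk collect.
-- Needs SELECT privileges on V$MYSTAT and V$STATNAME.
DECLARE
   TYPE t_rows IS TABLE OF big_table%ROWTYPE INDEX BY BINARY_INTEGER;

   v_rows   t_rows;
   v_pga    NUMBER;
BEGIN
   SELECT m.value
     INTO v_pga
     FROM v$mystat m JOIN v$statname n ON n.statistic# = m.statistic#
    WHERE n.name = 'session pga memory';

   DBMS_OUTPUT.PUT_LINE ('PGA before: ' || v_pga);

   SELECT * BULK COLLECT INTO v_rows FROM big_table;   -- no LIMIT

   SELECT m.value
     INTO v_pga
     FROM v$mystat m JOIN v$statname n ON n.statistic# = m.statistic#
    WHERE n.name = 'session pga memory';

   DBMS_OUTPUT.PUT_LINE ('PGA after:  ' || v_pga);
END;
/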
FWIW, here is a skeletonized version of the entire procedure. At this point I'm not interested in individual statements so much as the overall structure, especially that limit on the BULK fetch. Consequently, I have stripped out most of the local variable declarations as well as most of the detail of the individual SQL statements.
PROCEDURE rebuild_st_order_req_summary
IS
   --
   v_fetch_limit   NUMBER := 1000;
   --
   v_msg           VARCHAR2 (200);

   CURSOR v_sugg_qty_recs_to_calc
   IS
      SELECT <column list>
        FROM ....
       WHERE ....
    ORDER BY ....;

   ---
   --- any logic for an ORDER BY in a cursor?
   ---

   TYPE v_table_type IS TABLE OF v_sugg_qty_recs_to_calc%ROWTYPE
      INDEX BY BINARY_INTEGER;

   v_rec           v_table_type;
BEGIN
   -- <snip other stuff>
   OPEN v_sugg_qty_recs_to_calc;

   LOOP
      FETCH v_sugg_qty_recs_to_calc
         BULK COLLECT INTO v_rec
         LIMIT v_fetch_limit;

      --
      -- here's the big one ... any justification for the limit?
      --
      FOR i IN 1 .. v_rec.COUNT
      LOOP
         v_temp_qty :=
            (  v_rec (i).wc_qty
             + v_rec (i).ev_qty
             + GREATEST (v_rec (i).as_qty,
                         v_rec (i).sr_qty,
                         v_rec (i).fc_qty,
                         v_rec (i).im_qty));

         BEGIN
            SELECT NVL (si.usable_physical_qty, 0)
              INTO v_store_inv_qty
              FROM ...
             WHERE ...;

            IF v_store_inv_qty <= 0
               OR v_store_inv_qty IS NULL
            THEN
               v_store_inv_qty := 0;
            END IF;
         EXCEPTION
            WHEN NO_DATA_FOUND
            THEN
               v_store_inv_qty := 0;
         END;

         BEGIN
            SELECT NVL ( (SUM (pod.adjusted_item_qty) - SUM (pod.received_qty)), 0)
              INTO v_total_on_order
              FROM ...
             WHERE ...;

            IF v_total_on_order < 0
            THEN
               v_total_on_order := 0;
            END IF;
         EXCEPTION
            WHEN NO_DATA_FOUND
            THEN
               v_total_on_order := 0;
         END;

         BEGIN
            SELECT NVL (SUM (a.order_quantity), 0)
              INTO v_wc_on_order
              FROM ...
             WHERE ...;
         EXCEPTION
            WHEN NO_DATA_FOUND
            THEN
               v_wc_on_order := 0;
         END;

         v_non_wc_on_order := v_total_on_order - v_wc_on_order;

         UPDATE smt.store_order_request_summary a
            SET ...;
      END LOOP;                                        -- bulk fetch loop

      COMMIT;
      v_rec.delete;
      EXIT WHEN v_sugg_qty_recs_to_calc%NOTFOUND;
   END LOOP;                                           -- cursor loop

   CLOSE v_sugg_qty_recs_to_calc;
   -- <snip other code not of interest>
END rebuild_st_order_req_summary;
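And just to make my gut feeling concrete, the alternative I had in mind is a single unlimited fetch with the processing loop otherwise unchanged. A structural sketch only, not tested against the real tables:

-- Sketch of the no-LIMIT shape: one fetch, one loop, one commit.
-- Trades the ten fetch calls for holding all ~10,000 rows in PGA at once.
BEGIN
   SELECT <column list>
     BULK COLLECT INTO v_rec
     FROM ....
    WHERE ....
    ORDER BY ....;

   FOR i IN 1 .. v_rec.COUNT
   LOOP
      NULL;   -- same per-row SELECTs and UPDATE as in the skeleton above
   END LOOP;

   COMMIT;
END;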