Undo read performance - any solutions except faster disks or less undo?
Oct 10 2008 (edited Oct 14 2008)
Hi,
A fairly frequent issue I face in Oracle is that a SQL statement runs slowly because many changes have been made to the underlying tables, so the query has to apply a lot of undo to build read-consistent versions of the blocks it reads. I am currently on Oracle 9.2, so I am wondering whether newer releases improve the algorithms for locating and applying the undo records needed for a consistent read. When this problem hits, producing the consistent-read version of a block via undo seems to be an order of magnitude slower than simply reading the current block from the datafile. A job that normally takes 2-3 hours can fail after 12 hours because of this.
My approach to this issue so far has been a combination of :
1. Tuning SQL to run faster. Decreases likelihood of needing to go to the undo tablespace.
2. Tuning the schema. E.g. using materialized views to avoid changes to underlying tables, adding indices, etc. Not possible in all cases though.
3. Tuning job schedules. E.g. make sure we don't run heavy batch updates at the same time as the job. We can't do anything about heavy online updates, though.
4. Tuning parallelism on Oracle and application level.
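As an illustration of point 2, a complete-refresh materialized view refreshed off-peak lets the long-running job read a static segment instead of the heavily updated base table (table, column, and MV names here are hypothetical; adjust the refresh time to your own quiet window):

```sql
-- Hedged sketch: the job queries SALES_MV, which only changes at the
-- scheduled refresh, so its blocks need little or no undo at read time.
CREATE MATERIALIZED VIEW sales_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH TRUNC(SYSDATE) + 1 + 2/24   -- first refresh 02:00 tomorrow
  NEXT  TRUNC(SYSDATE) + 1 + 2/24        -- then daily at 02:00
AS
SELECT product_id, SUM(amount) AS total_amount
  FROM sales
 GROUP BY product_id;
```

The trade-off, of course, is data freshness and the cost of the refresh itself, which is why this isn't possible in all cases.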
But it seems to me that there must be some opportunity to improve how Oracle actually reads these undo records so the performance impact of doing a consistent read is reduced.
Do any of you know of improvements in Oracle that address this issue, or additional workarounds other than faster I/O?
regards,
Gunnar