Oracle Database Discussions

Data fetch Performance of Global Temp table

user10472047, Nov 11 2020 (edited Dec 17 2020)

We are on Oracle Database 12.1 on Exadata, with optimizer_features_enable set to 11.2.0.4.
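For reference, the setting in effect can be confirmed from the standard parameter view:

    SELECT name, value
    FROM   v$parameter
    WHERE  name = 'optimizer_features_enable';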
We have a scenario that uses a global temporary table: a PL/SQL procedure loads the GTT through some complex business logic, and a third-party tool then fetches data from it. The fetch SQLs are submitted to the database in a nested-loop fashion, each time with a different literal passed to column GTT2.CST as a filter, and the resulting aggregated data is written to a file. This consumes significant database time while reading from the GTTs. Each iteration takes 30+ seconds and there are ~500 iterations (one per literal), so the whole process runs for 4-5 hours. Yet parsing time is very small compared to the actual execution time.
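For illustration, each iteration the tool submits has roughly the shape below; GTT1, GTT2 and CST are the real names, while ID, GRP_COL and AMT are made-up stand-ins for the actual join and aggregation columns (the real query is more complex):

    -- One iteration; the tool re-submits this with a different literal each time
    SELECT g2.cst, g1.grp_col, SUM(g1.amt) AS total_amt
    FROM   gtt1 g1
           JOIN gtt2 g2 ON g2.id = g1.id
    WHERE  g2.cst = 'LITERAL_1'
    GROUP  BY g2.cst, g1.grp_col;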
We have asked the team to change the row-by-row processing to a bulk approach (collecting all the literals and querying the database once), so that the work is done in a single execution with less resource consumption. But because this is a third-party tool, we are seeing some complexity and push-back, so in the meantime we are trying to do as much as we can on the database side. We have tried several options for changing the execution path or otherwise reducing the execution time of each iteration, but without much benefit, mainly because almost all of the DB time is spent scanning/filtering the GTT (here GTT1), which holds millions of rows.
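For example, the single-pass version we proposed would look something like this sketch, assuming the ~500 literals can be staged first (CST_LIST is an invented name for that staging table, and the other made-up names are as above):

    -- One scan of GTT1 instead of ~500: aggregate per literal in a single pass
    SELECT g2.cst, g1.grp_col, SUM(g1.amt) AS total_amt
    FROM   gtt1 g1
           JOIN gtt2 g2 ON g2.id = g1.id
    WHERE  g2.cst IN (SELECT cst FROM cst_list)
    GROUP  BY g2.cst, g1.grp_col;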
1) So although we have asked for the processing to be done in bulk, which will require a larger change in code and design, we are also looking for a short-term fix to make the current processing faster. Our thinking was: since this is a global temporary table, all of those millions of rows sit on disk, so the disk reads must be costing time and resources. Is there any way to hold the GTT data in cache (through hints, profiles, etc.) so that the reads are faster for that duration?
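For instance, we were wondering whether something along these lines would help; the CACHE hint is documented for full table scans, but we are not sure it applies usefully to a GTT, whose blocks come from the temporary tablespace (same made-up column names as above):

    -- Attempt to keep the scanned blocks at the hot end of the buffer cache
    SELECT /*+ FULL(g1) CACHE(g1) */ g1.grp_col, SUM(g1.amt) AS total_amt
    FROM   gtt1 g1
    WHERE  g1.id IN (SELECT id FROM gtt2 WHERE cst = 'LITERAL_1')
    GROUP  BY g1.grp_col;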
2) Or is my understanding correct that, in the first SQL Monitor report, the ~3 GB temp spill means the query was reading from TEMP/disk, whereas in the second report there is no temp spill, so it executed fully in PGA, i.e. in memory (much like the buffer cache)? If so, the query in the second report cannot be made any faster by moving the table into cache. Is that right? Also, since these are full-table-scan plans, is it advisable to force a PARALLEL(2) plan through a profile, and would that cut the execution time by roughly 50%?
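If it is advisable, the plan would be to hand-test the hint first before forcing it through a profile, e.g. (same made-up names as above):

    -- Trial run with degree-2 parallelism on the large GTT scan
    SELECT /*+ PARALLEL(g1 2) */ g1.grp_col, SUM(g1.amt) AS total_amt
    FROM   gtt1 g1
    WHERE  g1.id IN (SELECT id FROM gtt2 WHERE cst = 'LITERAL_1')
    GROUP  BY g1.grp_col;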
3) Another point raised by team members: they suggest that moving to write-back flash cache mode (we currently have write-through flash cache enabled) will make reads from the GTTs faster. We want to understand from the experts whether that reasoning is correct, and if so, how it would help.
Attached below is a sample SQL Monitor report of the query:
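For reference, reports like this can be regenerated with the documented DBMS_SQLTUNE API (the SQL_ID below is a placeholder):

    -- Regenerate the SQL Monitor report for a given statement
    SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
             sql_id       => '&sql_id',
             type         => 'TEXT',
             report_level => 'ALL')
    FROM   dual;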

This post has been answered by user9540031 on Nov 13 2020