Index Size Increase (Rebuild with Parallel)
All,
I'm not sure whether this is an Exadata problem, but we saw it happen on our Exadata machine (maybe the larger extents are a factor). However, I thought this forum might have an idea, especially if it's a generic RDBMS issue rather than an Exadata-specific one.
I am running my Production EDW on an Exadata V2 quarter-rack, with 11.2.0.2 BP7 RDBMS and GI, and 11.2.2.3.2 ESS, on OEL 5.5.
We have started to implement HCC compression due to space pressure. Last Thursday/Friday, we attempted to compress our largest table:
• 20bn rows, 3.7TB in size, DEGREE 2, INSTANCE 1, 2555 partitions
• 8 bitmap partitioned indexes, each with DEGREE 20, INSTANCE 1. Total size AFTER the rebuild: 1.3TB (significantly larger than before!!)
• ‘Archive High’ HCC compression applied to the oldest 10% of partitions (255).
• Remaining partitions were left uncompressed.
Because our biggest tables make use of bitmap indexes, and HCC doesn’t get along with bitmaps, we had to:
• Disable all bitmap indexes
• Compress the oldest 10% of partitions with archive high HCC compression
• Rebuild the indexes for these partitions (255 total, 8 bitmap indexes each)
• Gather statistics
• Rebuild the indexes for the rest of the partitions (the remaining 90%). This took BY FAR the longest time.
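For reference, the sequence looked roughly like the following sketch. This is illustrative only — the table, index, and partition names are invented, and your exact DDL (particularly how you mark the bitmap indexes unusable) may differ:

```sql
-- Illustrative sketch; FACT_SALES, FACT_SALES_BIX1, P_2009_01 etc. are made-up names.

-- 1. Disable the bitmap indexes on the partitions being moved
ALTER TABLE fact_sales MODIFY PARTITION p_2009_01 UNUSABLE LOCAL INDEXES;

-- 2. Compress the oldest partitions with Archive High HCC
--    (MOVE PARTITION marks the local index partitions unusable anyway)
ALTER TABLE fact_sales MOVE PARTITION p_2009_01 COMPRESS FOR ARCHIVE HIGH;

-- 3. Rebuild the index partitions for the compressed partitions
ALTER INDEX fact_sales_bix1 REBUILD PARTITION p_2009_01;

-- 4. Gather statistics
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FACT_SALES', GRANULARITY => 'PARTITION')

-- 5. Rebuild the index partitions for the remaining (uncompressed) partitions
ALTER INDEX fact_sales_bix1 REBUILD PARTITION p_2011_06;
```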
During the last step, the index rebuilds seemed to take an excessive length of time – rebuilding all 8 bitmap indexes on one partition took about 3 minutes. This, obviously, had a big impact on the end users, as this happens to be the table that most feeds hit (of course…!). We happened to run into the nightly batch too, which was as fantastic as you can probably imagine.
On Monday, we noticed that the tablespace for the indexes had ballooned by about 800GB and figured out that the culprit was mainly this table’s indexes. Some index partitions had increased by up to 20x (the degree of parallelism used to rebuild the index). Further research shows that we have seen this on other tables with varying results (an index with DEGREE 8 might increase by 8x, etc.).
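In case it helps anyone reproduce the diagnosis, a query along these lines against DBA_SEGMENTS is how we spotted the blow-up (the index name pattern is invented for illustration):

```sql
-- Find the largest index partitions for the affected table's indexes
-- (FACT_SALES_BIX% is a made-up name pattern; substitute your own).
SELECT segment_name,
       partition_name,
       ROUND(bytes / 1024 / 1024) AS size_mb
FROM   dba_segments
WHERE  segment_type = 'INDEX PARTITION'
AND    segment_name LIKE 'FACT_SALES_BIX%'
ORDER  BY bytes DESC;
```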
We had the same issue in our Development environment as well. When we rebuilt the same indexes in Development (and Production) with no parallelism (DEGREE 1), we were able to rebuild 6 entire partitions (with 8 indexes each) in less than one minute: an 18x performance improvement. I suspect that the underlying table having DEGREE 2 allowed it to use Smart Scan, but still.
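The serial rebuild workaround was essentially this (again, names are invented; the NOPARALLEL step is there so a future rebuild doesn’t silently inherit the stored DEGREE):

```sql
-- Rebuild one index partition serially rather than with the stored DEGREE
ALTER INDEX fact_sales_bix1 REBUILD PARTITION p_2011_06 PARALLEL 1;

-- Reset the index's stored degree so later maintenance stays serial
ALTER INDEX fact_sales_bix1 NOPARALLEL;
```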
Obviously, increasing the index size by 20x was not meant to be part of the HCC compression results. We’ve saved 3.1TB in total with HCC, but we have increased the size of the indexes on this one table alone by 1TB! However, we also saw this happen on index partitions which had NOT been compressed by HCC – which seems to rule out HCC being a factor in this (probably?).
We have seen this on another table which is partitioned with bitmap indexes. However, we only saw this increase in those indexes which had DEGREE >1. There were two other indexes on that table which were serial and saw NO increase in space usage.
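If you want to check whether you’re exposed to the same pattern, something like this against DBA_INDEXES shows which indexes carry a parallel DEGREE setting (table name is made up; note DEGREE is stored as a string, hence the TRIM):

```sql
-- List indexes on a table whose stored degree of parallelism is not 1
SELECT index_name, degree, instances
FROM   dba_indexes
WHERE  table_name = 'FACT_SALES'
AND    TRIM(degree) <> '1';
```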
I don’t know whether we see this because of…
• bitmap indexes
• bitmap partitioned indexes
• indexes on partitioned tables
• tables which use SmartScan
• HCC
• Exadata
• 11.2
• Larry Ellison hates me and wants to make my life miserable by introducing random errors just to annoy me
• All/any/none of the above
This sounds to me like a bug with parallelized index rebuilds. Has anyone else seen this happen?
Any ideas, advice, jokes, cash appreciated...
Mark