Empty Blocks + minimizing space consumption
829112 · Jan 6 2011 (edited Jan 7 2011)

I am loading a database for OLAP/DSS purposes.
So, the tables are essentially static -- after the load, no inserts, updates or deletes, only readonly access by the users.
The tables are either replaced periodically, or are partitioned, providing a sliding window of data.
1) Typical table creation:
create table &oracle_snapshot
pctfree 0
compress
parallel 4
as select * from temp_snapshot ;
2) The primary key is created and attached
3) The table is loaded
4) The other indexes are created -- create bitmap index ...
5) Empty blocks are deallocated
alter table &oracle_snapshot deallocate unused
6) The table is analyzed
analyze table &oracle_snapshot compute statistics
7) Now I run select * from user_tables and see that the table has
13045298 rows
845271 blocks
1321 empty blocks
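For reference, the dictionary query I'm running is something like this (BLOCKS and EMPTY_BLOCKS are the standard USER_TABLES columns):

```sql
-- Row and block counts recorded by the ANALYZE in step 6
select table_name, num_rows, blocks, empty_blocks
from   user_tables
where  table_name = upper('&oracle_snapshot');
```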
I don't understand this
Shouldn't there be 0 empty blocks?
I've tried three flavors of defining a table, with the intent of making the most efficient use of storage.
(I know I'm quibbling over < 2% of the table's resulting size.)
1) initial 64k, next and pctincrease left at defaults --- which supposedly are 64k and 50%
2) initial 345M next 1M pctincrease 0
3) initial 6764608K, next and pctincrease left at defaults
The 67646... value was arrived at by querying the number of blocks the prior table consumed, used as an initial estimate for the new table.
(The table is replaced periodically, as this is more efficient than trying to update the existing table)
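The sizing estimate for flavor 3 came from a query along these lines (USER_SEGMENTS is the standard view; an 8K block size is assumed here, so adjust for your db_block_size):

```sql
-- How much space did the previous incarnation of the table consume?
-- blocks * 8 gives KB for an 8K block size.
select blocks, blocks * 8 as size_kb
from   user_segments
where  segment_name = upper('&oracle_snapshot');
```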
The results are strange to me.
Blocks Empty Total
845271 1321 846512
845576 1272 846848
845507 1213 846720
The first method seems to consume the least space, whereas I would expect it to consume the most as its extents grow large.
I would expect #3 to have the fewest empty blocks, which it does.
Actually, I was expecting "blocks" to be the same for each, and only the number of empty blocks to change.
This must have something to do with defining the table to use compression, right?
But it still doesn't make much sense to me.
Why would defining a larger initial extent cause the table to use more overall blocks?
Also, running "deallocate unused" and then "analyze" does not change the number of "empty blocks"
So, there must be two kinds of empty blocks, those never touched, and those freed up through compression ?
Trying to use SHRINK SPACE results in an error because of the COMPRESS attribute on the table.
Is there a way to release those touched empty blocks?
(coalesce is simply shrink space compact)
So, any useful explanations as to why this is happening this way?
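One thing I have not yet tried, in case it helps frame an answer: rebuilding the segment with MOVE, which (as I understand the documentation) repacks the rows and resets the high-water mark even on a compressed table, at the cost of a long operation and unusable indexes afterward. The index name below is illustrative:

```sql
-- Rebuild the segment in place; repacks rows below a new high-water mark
-- and works on a COMPRESS table, unlike SHRINK SPACE.
alter table &oracle_snapshot move compress;

-- MOVE changes rowids, so every index goes UNUSABLE and must be rebuilt.
alter index &oracle_snapshot._pk rebuild;
```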