Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
It seems obvious that the answer would be no: null values don't take up space in a PL/SQL table in memory. But I'm not sure how the space is allocated before it's filled in with data. Since I potentially have millions of rows in my PL/SQL table, the difference could be enormous. I am not familiar with which memory area in the database these collections are stored in, or how that space is allocated and extended.
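My working assumption is that the collection lives in my session's PGA. If that's right, I could presumably watch the relevant session statistics while loading it (this needs SELECT access on V$MYSTAT and V$STATNAME, which my account may not have):

```sql
-- Assumption: the PL/SQL table is allocated in the session's PGA,
-- so these two statistics should reflect its footprint.
SELECT n.name, s.value
  FROM v$mystat   s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name IN ('session pga memory', 'session pga memory max');
```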
Here is the sample table type (the fields I'm asking about are marked with an asterisk):
TYPE det_rectype IS RECORD (
    pk_id              VARCHAR2(32),
    fk_orderno         VARCHAR2(32),
    order_notes        VARCHAR2(5000),  -- *
    user_fld1          VARCHAR2(100),   -- *
    user_fld2          VARCHAR2(100),   -- *
    user_fld3          VARCHAR2(100),   -- *
    user_fld4          VARCHAR2(100),   -- *
    user_fld5          VARCHAR2(100),   -- *
    transaction_id     VARCHAR2(32),
    transaction_number NUMBER(8),
    transaction_type   VARCHAR2(1),
    last_update_date   DATE,
    last_update_user   VARCHAR2(100));

TYPE det_tabtype IS TABLE OF det_rectype INDEX BY BINARY_INTEGER;

l_det_table det_tabtype;
If, for example, the record size is 5,713 bytes when all fields are filled in, then loading 1 million rows would consume 5,713,000,000 bytes (roughly 5.7 GB) of memory. However, the rows I'm loading have almost no data in the asterisked fields, meaning roughly 5,500 bytes in the middle of each record are null. Does that mean 1 million rows still takes up 5,713,000,000 bytes, or does it now take up only about 213,000,000 bytes?
Thanks for any insight.
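One thing I thought of trying myself is a rough before/after measurement (a sketch only; it assumes direct SELECT grants on V$MYSTAT/V$STATNAME, uses a cut-down record for brevity, and leaves the large asterisked field null):

```sql
DECLARE
    -- Reduced version of det_rectype: one small field, one large field.
    TYPE det_rectype IS RECORD (
        pk_id       VARCHAR2(32),
        order_notes VARCHAR2(5000));
    TYPE det_tabtype IS TABLE OF det_rectype INDEX BY BINARY_INTEGER;

    l_det_table  det_tabtype;
    l_pga_before NUMBER;
    l_pga_after  NUMBER;

    -- Helper: current 'session pga memory' statistic for this session.
    FUNCTION session_pga RETURN NUMBER IS
        l_val NUMBER;
    BEGIN
        SELECT s.value
          INTO l_val
          FROM v$mystat   s
          JOIN v$statname n ON n.statistic# = s.statistic#
         WHERE n.name = 'session pga memory';
        RETURN l_val;
    END;
BEGIN
    l_pga_before := session_pga;

    -- Populate only the small field; order_notes stays NULL,
    -- mimicking rows where the 5,500 asterisked bytes are empty.
    FOR i IN 1 .. 100000 LOOP
        l_det_table(i).pk_id := 'X';
    END LOOP;

    l_pga_after := session_pga;
    DBMS_OUTPUT.put_line('PGA growth: '
        || (l_pga_after - l_pga_before) || ' bytes for '
        || l_det_table.COUNT || ' rows');
END;
/
```

If the per-row growth comes out near the full declared record size, the null fields are being pre-allocated; if it is closer to the populated bytes, they are not. I'd welcome confirmation either way.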