Fastest way to import data into big tables
Kunwar · Nov 10 2010 (edited Nov 10 2010)
Hi friends,
I have recently joined a billing product setup team for a telecom giant. As part of the setup process / initial testing we need to load a huge amount of data (nearly 1.5 billion records, ~200 GB) into some tables. Currently I am using impdp for the import, and it takes a long time (more than 9 hours to load this data).
I have tried the following ways to import the data (all these tables are partitioned):
1. Normal impdp (single thread, i.e. parallel=1): this takes a very long time (>24 hours).
2. Normal impdp, but partition by partition: this completes in relatively less time than the first method.
3. Drop the indexes, do an insert append from another schema in the same instance that already has the data, then recreate the indexes (the whole process takes ~9 hours).
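For reference, the third approach looks roughly like this. This is only a sketch: the schema, table, index, and column names (src_schema, big_table, big_table_ix, acct_id) are placeholders, and NOLOGGING/PARALLEL settings would need to fit your environment:

```sql
-- Drop the index before the load so it is not maintained row by row
DROP INDEX big_table_ix;

-- Direct-path insert from the source schema in the same instance
INSERT /*+ APPEND */ INTO big_table
SELECT * FROM src_schema.big_table;
COMMIT;

-- Recreate the index afterwards, in parallel, with reduced redo
CREATE INDEX big_table_ix ON big_table (acct_id)
  PARALLEL 8 NOLOGGING;
```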
My questions to all:
1. Is there any other way/trick in the book that I can try to load the data in less time?
2. I have observed that even when I specify parallel=8 (or any other number), sometimes parallel workers are spawned and sometimes not. Can someone explain why?
3. How can I tell, before running impdp, whether my PARALLEL setting is actually going to spawn parallel workers? I have searched on this subject but had no success finding out how to determine this.
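On point 2, one way I know to check what a running job is actually doing is to query the Data Pump session views while the import runs (this only shows workers after the fact, not before; column usage here is to the best of my understanding):

```sql
-- List master and worker processes for running Data Pump jobs;
-- multiple WORKER rows for one job_name mean parallelism kicked in
SELECT d.job_name,
       d.session_type,   -- e.g. MASTER, WORKER
       s.sid,
       s.serial#
  FROM dba_datapump_sessions d
  JOIN v$session s
    ON d.saddr = s.saddr
 ORDER BY d.job_name, d.session_type;
```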
I don't know which strategy to follow, because this is an involved and repetitive task for me. It keeps me from concentrating on the other DBA tasks that come my way.
Cheers,
Kunwar