Export/import of individual big partitions with Data Pump
746721 · Jan 15 2010, edited Jan 21 2010
Hi all,
I’m about to transfer a database to a different platform using Data Pump. Up to now I’ve always done such jobs via full export/import without any hassle. This database, however, is much too big (> 2 TB) to do it the standard way.
The time of outage involved is just not acceptable.
I wonder whether this approach is practical.
Facts:
- Version 10.2.0.4
- complete database size: ~2 TB
- the bulk of the data is contained in just two tables (bigtableA and bigtableB)
- both tables are partitioned by month (one partition per month)
I thought of doing the transfer in a number of steps.
- Set up the new database from scratch
- Export/import metadata only
- Export/import all relevant schemas except bigtableA and bigtableB
- Export/import bigtableA:p1 (only the current partition)
- Export/import bigtableB:p1 (only the current partition)
- Put the new database online. The application should now work with current data
- After going online, I’d import the remaining partitions, which may take quite some time though
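To make the first steps concrete, the metadata and schema exports might look like the parameter files below (directory object, schema name and file names are just my assumptions here):

```
# meta.par -- metadata-only export of the whole database
directory=exp_dump
dumpfile=meta.dmp
full=y
content=metadata_only

# schemas.par -- all relevant schemas, skipping the two big tables
directory=exp_dump
dumpfile=schemas.dmp
schemas=user
exclude=TABLE:"IN ('BIGTABLEA','BIGTABLEB')"
```

These would then be run as, e.g., expdp system parfile=meta.par.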
The last point is where I start to run into problems.
How can I import individual partitions into an existing table while the database is online?
I’ve used these commands:
Source:
expdp directory=exp_dump dumpfile=bigtableA:p2.dmp tables=user.bigtableA:p2 job_name=expbigtableA
Target:
impdp directory=exp_dump dumpfile=bigtableA:p2.dmp table_exists_action=append tables=user.bigtableA:p2 job_name=impbigtableA
I keep getting "ORA-30036: unable to extend segment by 4 in undo tablespace 'UNDOTBS1'".
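For what it’s worth, I watched the undo consumption during the import with a query along these lines (joining the session to its transaction via the transaction address), and the whole partition load appears to run as one big transaction:

```
-- undo blocks/records held by active transactions (run while impdp is loading)
SELECT s.sid, s.program, t.used_ublk, t.used_urec
  FROM v$transaction t
  JOIN v$session     s ON s.taddr = t.addr;
```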
Up to now, I’ve not fiddled about with partitions in this way.
Is the outlined procedure reasonable?
Why is all the data filling up the undo tablespace?
If this is normal behavior, is there a way to not use this much of undo tablespace?
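In case it matters for your answers: the alternative I’ve been eyeing is to import each old partition into a staging table of the same structure and then swap it in with a partition exchange, roughly like this (table and partition names are made up):

```
-- swap the staging table's segment into the target partition
-- (a metadata-only operation, so it should barely touch undo)
ALTER TABLE user.bigtableA
  EXCHANGE PARTITION p2 WITH TABLE user.bigtableA_p2_stage
  INCLUDING INDEXES WITHOUT VALIDATION;
```

But I’m not sure whether that’s the intended way to do this with Data Pump.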
Sorry if I got it completely wrong when it comes to handling partitioned tables with export/import.
I’d appreciate your suggestions.
Louis