The original post mentioned a max 7-hour load of an unspecified data volume transferred between two databases.
I worked on a similar project quite some time ago. Lessons learned:
- List- or interval-partition the source table by date/timestamp, ideally in its own tablespace, and use local indexes.
- If you get pushback over the values to partition on, create a new table holding just the current date's rows, then create a view that combines both tables.
- Use Data Pump transportable tablespaces to move the tablespace from the source to the target database. Plan B would be EXCHANGE PARTITION.
- Rebuild the local indexes on the target.
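A minimal sketch of the partitioning and view setup described above. All table, column, and index names are hypothetical:

```sql
-- Hypothetical names throughout. Interval-partition the source table by
-- date; Oracle creates a new partition automatically for each day.
CREATE TABLE trades (
  trade_id   NUMBER,
  trade_date DATE,
  amount     NUMBER
)
PARTITION BY RANGE (trade_date)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(PARTITION p0 VALUES LESS THAN (DATE '2024-01-01'));

-- Local index: equipartitioned with the table, so it can be maintained
-- and rebuilt per partition.
CREATE INDEX trades_dt_ix ON trades (trade_date) LOCAL;

-- If partitioning the main table meets resistance, keep the current
-- day's rows in a small side table and expose both through a view:
CREATE OR REPLACE VIEW trades_all AS
  SELECT * FROM trades
  UNION ALL
  SELECT * FROM trades_today;
```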
This should be way faster than pulling the data with SQL*Loader or Data Pump table exports. It works ideally for, say, a daily trading system where the source database performs best with the fewest possible rows in the table; enquiries can then be run on the target database instead.
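The transportable-tablespace move could be sketched roughly like this (tablespace, directory object, credentials, and file paths are all assumptions, not a definitive recipe):

```shell
# On the source: the tablespace must be read-only before transport.
sqlplus / as sysdba <<'SQL'
ALTER TABLESPACE trades_ts READ ONLY;
SQL

# Export only the metadata for the tablespace.
expdp system directory=dp_dir dumpfile=trades_ts.dmp \
      transport_tablespaces=trades_ts

# Copy the datafile(s) and the dump file to the target host, then
# plug the tablespace in on the target:
impdp system directory=dp_dir dumpfile=trades_ts.dmp \
      transport_datafiles='/u01/oradata/trg/trades_ts01.dbf'
```

The point is that the datafiles move as-is, so the cost is essentially a file copy plus a metadata import, not a row-by-row reload.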
If you're pulling data from non-Oracle databases, use GoldenGate to load a staging table, or use SQL*Loader with a direct-path, parallel load (parallelism set to a high value) reading from a shared NFS mount.
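The SQL*Loader variant might look roughly like this; control file, paths, and connect string are hypothetical (with DIRECT=TRUE PARALLEL=TRUE, the "high value" of parallelism comes from running several such sessions at once against different data files):

```shell
# Hypothetical sketch: one of several concurrent direct-path loads
# reading staged data from a shared NFS mount.
sqlldr userid=stage_user@targetdb \
       control=/nfs/stage/trades.ctl \
       data=/nfs/stage/trades_01.dat \
       log=/nfs/stage/trades_01.log \
       direct=true \
       parallel=true
```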