Hello Experts,
We've implemented a Data Pump import over a database link (network mode) using the DBMS_DATAPUMP package, as follows:
jobhandle := DBMS_DATAPUMP.OPEN('IMPORT', 'TABLE', 'PROD_DB_LNK', 'REFRESH');
DBMS_DATAPUMP.METADATA_REMAP(jobhandle, 'REMAP_SCHEMA', 'DUMP_SRC', 'DUMP_TRGT');
DBMS_DATAPUMP.METADATA_FILTER(handle => jobhandle, name => 'NAME_EXPR', value => 'IN (''SDP_HIST_FCT'')', object_type => 'TABLE');
DBMS_DATAPUMP.DATA_FILTER(handle => jobhandle, name => 'SUBQUERY', value => 'WHERE perd = ''A20151''', table_name => 'SDP_HIST_FCT', schema_name => 'DUMP_SRC');
DBMS_DATAPUMP.SET_PARAMETER(jobhandle, 'TABLE_EXISTS_ACTION', 'TRUNCATE');
DBMS_DATAPUMP.SET_PARAMETER(jobhandle, 'ESTIMATE', 'STATISTICS');
DBMS_DATAPUMP.START_JOB(jobhandle);
DBMS_DATAPUMP.DETACH(jobhandle);
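One thing worth noting: because the session detaches right after START_JOB, the resumable wait only surfaces much later in the job log. A minimal sketch of the same flow that blocks until completion with DBMS_DATAPUMP.WAIT_FOR_JOB instead (the remap/filter/parameter calls are elided and assumed identical to the above):

DECLARE
  jobhandle NUMBER;
  job_state VARCHAR2(30);
BEGIN
  jobhandle := DBMS_DATAPUMP.OPEN('IMPORT', 'TABLE', 'PROD_DB_LNK', 'REFRESH');
  -- ... same METADATA_REMAP / METADATA_FILTER / DATA_FILTER / SET_PARAMETER calls as above ...
  DBMS_DATAPUMP.START_JOB(jobhandle);
  -- Wait in this session instead of detaching, so the final state
  -- (and any ORA-39171 resumable wait) is visible immediately.
  DBMS_DATAPUMP.WAIT_FOR_JOB(jobhandle, job_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
END;
/

This does not change how much undo the load generates, but it makes it easier to catch the moment the job stalls.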
If we move one ~12 GB partition of the history fact table this way, the job fails with an UNDO tablespace error after about 21 hours:
ORA-39171: Job is experiencing a resumable wait.
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS1'
ORA-31693: Table data object "DUMP_TRGT"."SDP_HIST_FCT" failed to load/unload and is being skipped due to error:
If we do the same via dump files (export followed by import), it works fine and completes in under an hour.
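For comparison, the export half of the working file-based path can be sketched with the same package; the directory object name DATA_PUMP_DIR and the dump file name are assumptions for illustration:

DECLARE
  h NUMBER;
BEGIN
  -- File-based export of the same partition's rows to a dump file.
  h := DBMS_DATAPUMP.OPEN('EXPORT', 'TABLE');
  DBMS_DATAPUMP.ADD_FILE(h, 'sdp_hist_fct.dmp', 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.METADATA_FILTER(h, 'NAME_EXPR', 'IN (''SDP_HIST_FCT'')');
  DBMS_DATAPUMP.DATA_FILTER(h, 'SUBQUERY', 'WHERE perd = ''A20151''', 'SDP_HIST_FCT', 'DUMP_SRC');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/

The import side is the same as the network-mode block above minus the database-link argument to OPEN, plus an ADD_FILE call pointing at the dump file.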
Details on UNDO:

undo_management    string    AUTO
undo_retention     integer   900
undo_tablespace    string    UNDOTBS1

FILE_NAME        /PDGTST/oradata02/DCP01/undotbs01.dbf
TABLESPACE_NAME  UNDOTBS1
BYTES            34359721682
BLOCKS           4194300
AUTOEXTENSIBLE   YES
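While the job runs, it may help to watch which transaction is actually consuming the undo. A small sketch (the 8192-byte block size is an assumption based on the BLOCKS/BYTES figures above):

-- Undo consumed per active transaction, joined to the session running it.
SELECT s.sid, s.serial#, s.program,
       t.used_ublk * 8192 AS undo_bytes   -- assumes 8 KB undo block size
FROM   v$transaction t
JOIN   v$session     s ON s.taddr = t.addr
ORDER  BY t.used_ublk DESC;

If the Data Pump worker session shows steadily growing undo_bytes, the load is running as one large conventional insert rather than in smaller committed chunks.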
Oracle database: Oracle Enterprise Edition 10.2.0.5
I searched this forum and others before raising this, but the only suggestion I found was that index maintenance can consume undo, and that does not apply here: with export/import the same indexes are maintained during the import and it still completes quickly.
Please advise what we should check, or whether our network-mode implementation above is incorrect.
Regards,
Bolo