Okay here's what I'm doing:
I'm exporting a full PeopleSoft database on Solaris 10 via EXPDP.
I'm running the job as the user SYSTEM and capturing both standard output and the job output in a log file. $TODAY is just an exported date-stamp variable.
This is my execution line:
expdp PARFILE=expdp.par DUMPFILE=psptst.$TODAY.f%U.dmp USERID=SYSTEM/****** logfile=expdp_$ORACLE_SID.$TODAY.log
My parfile looks like this:
DIRECTORY=data_pump_dir
FULL=Y
PARALLEL=8
CONTENT=ALL
EXCLUDE=SCHEMA:"IN (select username from dba_users where user_id < 54)"
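To sanity-check that the exclude filter is catching what I intend (the built-in accounts), this quick query lists the schemas the subquery actually matches. Just a sketch; run it as a privileged user:

```sql
-- Schemas the EXCLUDE filter will skip (built-in accounts in this database)
SELECT username, user_id
  FROM dba_users
 WHERE user_id < 54
 ORDER BY user_id;
```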
I'm executing the command on the same server where the database resides, and I have plenty of space. The server is a Solaris M5000 with 64 GB of RAM and plenty of CPU power, and this is the only database running on it. I also shut the database down and brought it back up in restricted mode so nothing else can connect to it, since this export is for a migration to Linux and 11.2. The current database is 11.1.0.7.5.
My export runs until it hits 99 percent done, then just hangs for hours and never finishes. I've checked the job in TOAD and all the worker processes are ACTIVE, but the dump files, log file, and standard-out file are all not growing.
Here is the output of STAT when I connect to the data pump job:
Export> stat
Job: SYS_EXPORT_FULL_01
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 145,006,844,064
Percent Done: 99
Current Parallelism: 8
Job Error Count: 0
Dump File: /ora_exports/psptst/psptst.2013.Oct.14.1814.f%u.dmp
Dump File: /ora_exports/psptst/psptst.2013.Oct.14.1814.f01.dmp
bytes written: 27,062,280,192
Dump File: /ora_exports/psptst/psptst.2013.Oct.14.1814.f02.dmp
bytes written: 23,638,007,808
Dump File: /ora_exports/psptst/psptst.2013.Oct.14.1814.f03.dmp
bytes written: 10,316,234,752
Dump File: /ora_exports/psptst/psptst.2013.Oct.14.1814.f04.dmp
bytes written: 14,783,823,872
Dump File: /ora_exports/psptst/psptst.2013.Oct.14.1814.f05.dmp
bytes written: 22,908,686,336
Dump File: /ora_exports/psptst/psptst.2013.Oct.14.1814.f06.dmp
bytes written: 23,083,012,096
Dump File: /ora_exports/psptst/psptst.2013.Oct.14.1814.f07.dmp
bytes written: 23,221,575,680
Dump File: /ora_exports/psptst/psptst.2013.Oct.14.1814.f08.dmp
bytes written: 57,970,688
Worker 1 Status:
Process Name: DW01
State: WORK WAITING
Worker 2 Status:
Process Name: DW02
State: WORK WAITING
Worker 3 Status:
Process Name: DW03
State: WORK WAITING
Worker 4 Status:
Process Name: DW04
State: WORK WAITING
Worker 5 Status:
Process Name: DW05
State: WORK WAITING
Worker 6 Status:
Process Name: DW06
State: WORK WAITING
Worker 7 Status:
Process Name: DW07
State: WORK WAITING
Worker 8 Status:
Process Name: DW08
State: WORK WAITING
All the worker processes are waiting on "wait for unread message on broadcast channel". The master process shows "db file sequential read" as its wait event.
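To tie each Data Pump session to its current wait, I join the Data Pump session list to v$session. A diagnostic sketch using the standard 11g dictionary views:

```sql
-- Map Data Pump sessions (master + workers) to their current wait events
SELECT d.job_name, s.sid, s.program, s.event, s.seconds_in_wait
  FROM dba_datapump_sessions d
  JOIN v$session s ON s.saddr = d.saddr
 ORDER BY s.sid;
```

This is how I confirmed the workers are all on the broadcast-channel wait while only the master is on "db file sequential read".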
I have tried this with PARALLEL set to 4 and to 16 and get the same result. I've also tried it with no parallelism at all; after about two hours it just hangs indefinitely.
I have noticed that once Bytes Processed reaches 145,006,844,064 it just stops, regardless of the parallel level set. I checked, and the database's MAX_DUMP_FILE_SIZE is set to UNLIMITED.
There are no old jobs in dba_datapump_jobs, and the status of my job is EXECUTING.
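Since the master is sitting on "db file sequential read", I also check v$session_longops to see whether it is still grinding through some long operation (a sketch; column names per the standard 11g view):

```sql
-- Any in-flight long operations (e.g. a long table or LOB read) that haven't finished
SELECT sid, opname, target, sofar, totalwork, time_remaining
  FROM v$session_longops
 WHERE sofar <> totalwork;
```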
The alert log is clean, just the redo log switches I would expect.
This has me scratching my head, as it appears the job is just running forever but going nowhere.