We are going to back up 6 schemas daily, with a total size of 80 GB.
From the Oracle documentation I gather that PARALLEL works well when the dump is split across multiple files, because each worker process can write to its own file. But I am not sure how many parallel processes should be spawned, or into how many files the dump should be split.
The expdp command we are planning to use is:
expdp userid=\'/ as sysdba\' SCHEMAS=schema1,schema2,schema3,schema4,schema5,schema6 DUMPFILE=composite_schemas_expdp.dmp LOGFILE=composite_schemas_expdp.log DIRECTORY=dpump_dir2 PARALLEL=3
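From what I understand of the %U substitution variable, the split-file version would look something like this (the FILESIZE value here is only an illustrative guess, not something we have settled on):

expdp userid=\'/ as sysdba\' SCHEMAS=schema1,schema2,schema3,schema4,schema5,schema6 DUMPFILE=composite_schemas_expdp_%U.dmp FILESIZE=10G LOGFILE=composite_schemas_expdp.log DIRECTORY=dpump_dir2 PARALLEL=3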
Related info:
Oracle 11.2.0.2
Solaris 10 (x86_64) running on an HP ProLiant machine
8 CPUs with 32 GB RAM
SQL> show parameter parallel
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback         string      LOW
parallel_adaptive_multi_user         boolean     TRUE
parallel_automatic_tuning            boolean     FALSE
parallel_degree_limit                string      CPU
parallel_degree_policy               string      MANUAL
parallel_execution_message_size      integer     16384
parallel_force_local                 boolean     TRUE
parallel_instance_group              string
parallel_io_cap_enabled              boolean     FALSE
parallel_max_servers                 integer     32
parallel_min_percent                 integer     0
parallel_min_servers                 integer     0
parallel_min_time_threshold          string      AUTO
parallel_server                      boolean     TRUE
parallel_server_instances            integer     2
parallel_servers_target              integer     32
parallel_threads_per_cpu             integer     2
recovery_parallelism                 integer     0
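For what it's worth, the actual degree of the job can be checked from a second session while the export runs; this is just a sketch, assuming SELECT access to the DBA_DATAPUMP_JOBS view:

SQL> select owner_name, job_name, operation, state, degree from dba_datapump_jobs;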