Oracle Database Discussions

High PX qref latch waits

kenkrug · Oct 30 2013 (edited Oct 30 2013)

Our multi-terabyte DW system typically shows high PX qref latch waits during direct-path loads with parallelism. Below is a sample statement we're running as a benchmark. The source table has 100 million rows, and the target table is truncated just prior to the load.

The database is 11.2.0.3.8 running on a RHEL 6 VM with 512 GB RAM and 16 CPUs. We run the benchmark when the server is otherwise idle, and 50% of the wait time is on PX qref latch. This gives us about 275,000 inserts per second with parallel_execution_message_size (PEMS) at its default value of 16 KB. Increasing PEMS to its maximum of 64 KB reduced the PX qref latch waits and improved throughput to about 320,000 inserts per second. But if another parallel process is running with 32 slaves and no other active sessions, we see a significant increase in PX qref latch waits and about one-third less throughput from our benchmark.
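For anyone reproducing this, the latch waits and the message size actually in effect can be read from the dynamic performance views (a sketch; the views and columns are standard 11.2, the WHERE filters are just what we care about here):

```sql
-- Cumulative PX qref latch waits since instance startup
SELECT event, total_waits, time_waited_micro
FROM   v$system_event
WHERE  event = 'PX qref latch';

-- PEMS value currently in effect
SELECT name, value
FROM   v$parameter
WHERE  name = 'parallel_execution_message_size';
```

Sampling the first query before and after the benchmark run isolates the waits attributable to the load itself.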

With 16 CPUs I'd expect that running 40 concurrent parallel slaves would not hurt performance, but the increase in PX qref latch waits and the reduced throughput suggest it does.
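A quick way to confirm how many PX slaves are actually active while both jobs run (a sketch; both views exist in 11.2):

```sql
-- One row per parallel execution server process
SELECT server_name, status, pid, spid
FROM   v$px_process;

-- Busy vs. idle PX server counts
SELECT statistic, value
FROM   v$pq_sysstat
WHERE  statistic IN ('Servers Busy', 'Servers Idle', 'Servers Highwater');
```

If 'Servers Busy' exceeds roughly 2x the CPU count during the overlap, slave scheduling pressure alone could explain some of the extra latch contention.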

We've reviewed sar disk data while the tests run and I/O looks good: average svctm stays under 3 ms on the busy disks.

Any suggestions on how to reduce PX qref latches or other tuning suggestions?

INSERT /*+ append */ INTO ops$ORACLE.TEST_TP
NOLOGGING (SELECT /*+ parallel(t,8) */ * FROM ops$ORACLE.TEST_TP2 t)
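One caveat about the statement above: NOLOGGING is a table/segment attribute, not an INSERT clause, so in that position Oracle silently parses it as a table alias. If the intent is a minimally logged direct-path load, the usual pattern is (a sketch using the same table names):

```sql
-- Make the segment NOLOGGING so the direct-path load generates minimal redo
ALTER TABLE ops$ORACLE.TEST_TP NOLOGGING;

-- Parallel DML must be enabled for the insert side to run in parallel;
-- the parallel hint on t alone only parallelizes the query side.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ append */ INTO ops$ORACLE.TEST_TP
SELECT /*+ parallel(t,8) */ * FROM ops$ORACLE.TEST_TP2 t;
```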

init.ora values

NAME                             TYPE         VALUE
-------------------------------- ------------ -------
parallel_adaptive_multi_user     boolean      TRUE
parallel_automatic_tuning        boolean      TRUE
parallel_degree_limit            string       CPU
parallel_degree_policy           string       AUTO
parallel_execution_message_size  integer      65536
parallel_force_local             boolean      FALSE
parallel_instance_group          string
parallel_io_cap_enabled          boolean      FALSE
parallel_max_servers             integer      160
parallel_min_percent             integer      0
parallel_min_servers             integer      0
parallel_min_time_threshold      string       AUTO
parallel_server                  boolean      FALSE
parallel_server_instances        integer      1
parallel_servers_target          integer      64
parallel_threads_per_cpu         integer      1
recovery_parallelism             integer      0
pga_aggregate_target             big integer  225G
sga_target                       big integer  275G
memory_target                    big integer  0
filesystemio_options             string       SETALL
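With PEMS at 65536, the PX message pool in the SGA grows accordingly; v$px_buffer_advice (available in 11.2) can indicate whether the pool itself is a bottleneck (a sketch):

```sql
-- PX buffer pool usage and sizing statistics,
-- e.g. 'Buffers Current Free', 'Buffers HWM'
SELECT statistic, value
FROM   v$px_buffer_advice;
```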
Post locked on Nov 27 2013 · 2 comments · 3,635 views