Statspack Analyzer Report
Mar 17 2008 (edited Mar 21 2008)
Hi,
My server's I/O utilization is hitting up to 90%, so can anyone spot the issues in the Statspack report below, which I ran through StatspackAnalyzer?
My OS is Sun Solaris 5.1 and the database is Oracle 8.1.6.
Database Info Mon Mar 17 15:26:10 EDT 2008
DB ID Instance Release RAC Host
3514734651 bfip2 8.1.6.3.0 NO pdb5
Elapsed: 12.58 (min) 754.8 (sec)
DB Time: 225.5 (min) 13,529.79 (sec)
Cache: 2,657 MB
Block Size: 16,384 bytes
Transactions: 3.87 per second
Performance Summary
Physical Reads: 53/sec MB per second: 0.83 MB/sec
Physical Writes: 8/sec MB per second: 0.13 MB/sec
Single-block Reads: 176.65/sec Avg wait: 1.58 ms
Multi-block Reads: 0.78/sec Avg wait: 9.44 ms
Tablespace Reads: 0/sec Writes: 0/sec
Top 5 Events
Event Percentage of Total Timed Events
PX Deq: Table Q Normal 68.21 %
PX Deq: Execution Msg 14.03 %
PX Deq Credit: send blkd 11.57 %
latch free 2.03 %
db file sequential read 1.56 %
The sequential read event occurs when Oracle reads single blocks of a table or index. Look at the tablespace I/O section of the report for tablespaces with fewer than 2 average blocks per read, high response time, and a large percentage of the total I/O. Moving the data files with the largest amount of time spent waiting on single-block reads to faster storage can significantly reduce the time spent on this wait event; doing so could improve database performance by roughly 2%.
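To see which data files are carrying the single-block read load, a query along these lines against V$FILESTAT and V$DATAFILE (both present in 8.1.6) can help. Note that READTIM is reported in centiseconds, hence the multiplication by 10 to get milliseconds; this is a sketch, not tool output:

```sql
-- Sketch: rank data files by total time spent on reads
SELECT d.name                                            file_name,
       f.phyrds                                          physical_reads,
       ROUND(f.readtim * 10 / GREATEST(f.phyrds, 1), 2)  avg_read_ms
  FROM v$filestat f, v$datafile d
 WHERE f.file# = d.file#
 ORDER BY f.readtim DESC;
```

The files at the top of this list are the candidates for moving to faster storage.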
Tablespace I/O Stats
Tablespace Wait (s) Av Rd (ms) Read% % Total IO
IDX4BFIP2 86,064.3 0.7 100% 85.36%
DAT5BFIP2 48,306.6 9.4 47% 3.57%
DAT3BFIP2 30,889.0 8.5 55% 2.52%
DATTMP 21,817.8 9.3 98% 1.63%
DAT1BFIP2 13,485.6 7.2 95% 1.3%
IDX3BFIP2 16,600.0 10.0 88% 1.15%
DAT7BFIP2 11,053.6 8.2 82% 0.94%
IDX5BFIP2 15,177.6 13.6 91% 0.77%
DAT6BFIP2 10,028.7 9.9 100% 0.7%
DAT2BFIP2 6,363.0 9.0 58% 0.49%
Load Profile
Logical reads: 6,860/s Parses: 121.01/s
Physical reads: 53/s Hard parses: 0.16/s
Physical writes: 8/s Transactions: 3.87/s
Rollback per transaction: 0% Buffer Nowait: 100%
0 Recommendations:
Instance Efficiency
Buffer Hit: 99.22% In-memory Sort: undefined%
Library Hit: 99.99% Latch Hit: 80.06%
1 Recommendations:
Your latch hit percentage is 80.06%, below the recommended value of 99%. Investigate the specific latches in your latch activity section and tune to reduce library cache contention.
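One way to identify the specific latches behind a low latch hit ratio is to rank them by misses in V$LATCH; this is a sketch, and a healthy miss_pct is normally well under 1%:

```sql
-- Sketch: latches ordered by misses
SELECT name, gets, misses,
       ROUND(100 * misses / GREATEST(gets, 1), 2) miss_pct
  FROM v$latch
 WHERE misses > 0
 ORDER BY misses DESC;
```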
Wait Events
Event Waits Wait Time (s) Avg Wait (ms) Waits/txn
PX Deq: Table Q Normal 20,023 9,228.67 461 6.9
PX Deq: Execution Msg 89,035 1,898.19 21 30.5
PX Deq Credit: send blkd 2,891 1,565.37 541 1.0
latch free 308,673 274.58 1 105.7
db file sequential read 133,334 210.76 2 45.7
PX Deq: Signal ACK 8,918 173.03 19 3.1
PX Deq: Parse Reply 10,982 58.72 5 3.8
PX Deq: Execute Reply 25,684 25.41 1 8.8
enqueue 12,430 19.05 2 4.3
PX Deq: Join ACK 11,683 17.82 2 4.0
2 Recommendations:
You have high latch free waits at 105.7 per transaction. A latch free wait occurs when a process is waiting for a latch held by another process. Check the latch activity section below for the specific latches involved. Latch free waits are usually caused by SQL without bind variables, but cache buffers chains and redo generation can also cause them.
You have excessive enqueue wait times at 4.3 per transaction. Oracle locks protect shared resources and allow access to those resources via a queuing mechanism. A large amount of time spent waiting for enqueue events can be caused by various problems, such as waiting for individual row locks or waiting for exclusive locks on a table. Ensure that you are using locally-managed tablespaces (if you see enqueue ST waits) and review your settings for INITRANS and MAXTRANS (if you have enqueue TX waits). If you see enqueue TX waits, also check for DML locks and ensure that all foreign keys are indexed.
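To find out who is holding the enqueues others are waiting on, the classic self-join on V$LOCK (available in 8i) pairs each blocker with its waiters; this is a sketch:

```sql
-- Sketch: pair blocking sessions with sessions waiting on the same resource
SELECT blk.sid  blocker_sid,
       wtr.sid  waiter_sid,
       wtr.type lock_type,        -- e.g. TX, TM, ST
       wtr.id1, wtr.id2
  FROM v$lock blk, v$lock wtr
 WHERE blk.block   = 1            -- this session is blocking others
   AND wtr.request > 0            -- this session is waiting
   AND blk.id1 = wtr.id1
   AND blk.id2 = wtr.id2;
```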
Instance Activity Stats
Statistic Total per Second per Trans
SQL*Net roundtrips to/from client 127,938 169.5 43.8
consistent gets 5,283,187 6,997.6 1,809.3
db block changes 131,348 174.0 45.0
execute count 96,440 127.7 33.0
parse count (hard) 123 0.2 0.0
parse count (total) 91,360 121.0 31.3
physical reads 40,340 53.4 13.8
physical reads direct 23,064 30.6 7.9
physical writes 6,266 8.3 2.2
physical writes direct 0 0.0 0.0
redo writes 3,711 4.9 1.3
session cursor cache hits 57,953 76.8 19.9
sorts (disk) 0 0.0 0.0
sorts (memory) 46,466 61.5 15.9
table fetch continued row 351,395 465.4 120.3
table scans (long tables) 16 0.0 0.0
table scans (short tables) 7,869 10.4 2.7
4 Recommendations:
You have high network activity with 169.5 SQL*Net roundtrips to/from client per second, which is a high amount of traffic. Review your application to reduce the number of calls to Oracle by encapsulating data requests into larger pieces (i.e. make a single SQL request to populate all online screen items). In addition, check your application to see if it might benefit from bulk collection by using PL/SQL "forall" or "bulk collect" operators.
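As a sketch of the bulk-collect idea mentioned above (the EMP table and ENAME column here are placeholders, not objects from this database), one BULK COLLECT fetch replaces many per-row round trips:

```sql
-- Hypothetical EMP table; one array fetch instead of a row-by-row loop
DECLARE
  TYPE t_name_tab IS TABLE OF emp.ename%TYPE;
  v_names t_name_tab;
BEGIN
  SELECT ename BULK COLLECT INTO v_names FROM emp;
  FOR i IN 1 .. v_names.COUNT LOOP
    dbms_output.put_line(v_names(i));
  END LOOP;
END;
/
```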
You have high update activity with 174.0 db block changes per second. The db block changes statistic is a rough indication of total database work and of the rate at which buffers are being dirtied; you may want to tune your database writer (DBWR) process. You can determine which sessions and SQL statements have the highest db block changes by querying the v$session and v$sesstat views.
You have 351,395 table fetch continued row actions during this period. Migrated/chained rows always cause double the I/O for a row fetch, and "table fetch continued row" (chained row fetch) happens when we fetch BLOB/CLOB columns (if the avg_row_len > db_block_size), when we have tables with more than 255 columns, and when PCTFREE is too small. You may need to reorganize the affected tables with the dbms_redefinition utility and reset your PCTFREE parameters to prevent future row chaining.
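To confirm which tables have chained or migrated rows, ANALYZE ... LIST CHAINED ROWS can be run per suspect table; MY_TABLE below is a placeholder, and the CHAINED_ROWS table must be created first from the utlchain.sql script shipped with the database:

```sql
-- Create the CHAINED_ROWS table first: @?/rdbms/admin/utlchain.sql
-- MY_TABLE is a placeholder; repeat for each suspect table
ANALYZE TABLE my_table LIST CHAINED ROWS INTO chained_rows;

SELECT table_name, COUNT(*) chained
  FROM chained_rows
 GROUP BY table_name;
```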
You have high small table full-table scans, at 10.4 per second. Verify that your KEEP pool is sized properly to cache frequently referenced tables and indexes. Moving frequently-referenced tables and indexes to SSD will significantly increase the speed of small-table full-table scans.
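Assigning a small, frequently scanned table to the KEEP pool looks roughly like this (SMALL_LOOKUP and SMALL_LOOKUP_PK are placeholder names, and the pool must already be allocated in init.ora):

```sql
-- buffer_pool_keep must be set in init.ora first, e.g.:
--   buffer_pool_keep = (buffers:20000, lru_latches:1)
-- SMALL_LOOKUP / SMALL_LOOKUP_PK are placeholder object names
ALTER TABLE small_lookup    STORAGE (BUFFER_POOL KEEP);
ALTER INDEX small_lookup_pk STORAGE (BUFFER_POOL KEEP);
```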
Latch Activity
Latch Get Requests % Get Miss % NoWait Miss Wait Time (s)
Active checkpoint queue latch 3,436 0.2
enqueue hash chains 189,799 1.5
enqueues 259,386 0.5
error message lists 26,240 2.1
latch wait list 265,226 1.8 1.8
library cache 8,306,420 55.7 0.0
parallel query alloc buffer 118,214 3.7
parallel query stats 19,094 28.9
process queue 52,278 1.6
query server freelists 55,156 1.5
redo writing 20,881 0.6
session allocation 108,074 8.9
shared pool 150,109 1.3
1 Recommendations:
You have high library cache waits with 55.7% get miss. Consider pinning your frequently-used packages in the library cache with dbms_shared_pool.keep.
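Pinning a package with dbms_shared_pool.keep looks like this; SCOTT.MY_PKG is a placeholder, and the package itself is installed by the dbmspool.sql script if it is not already present:

```sql
-- dbms_shared_pool is created by @?/rdbms/admin/dbmspool.sql (run as SYS)
EXEC dbms_shared_pool.keep('SYS.STANDARD');   -- commonly pinned package
EXEC dbms_shared_pool.keep('SCOTT.MY_PKG');   -- placeholder name

-- Verify what is pinned
SELECT owner, name, type
  FROM v$db_object_cache
 WHERE kept = 'YES';
```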
Init.ora Parameters
Parameter Value
db_block_buffers 170,036
db_block_size 16KB
db_file_multiblock_read_count 8
log_archive_start true
optimizer_mode first_rows
oracle_trace_collection_path /u101/app/oracle/admin/bfip2/otra
oracle_trace_facility_path ?/rdbms/admin
parallel_automatic_tuning false
query_rewrite_enabled true
session_cached_cursors 100
shared_pool_size 300MB
sort_area_size 60MB
optimizer_cost_model undefined
cursor_sharing exact
3 Recommendations:
You are not using your KEEP pool to cache frequently referenced tables and indexes. This may cause unnecessary I/O. When configured properly, the KEEP pool guarantees full caching of popular tables and indexes. Remember, an average buffer get is often 100 times faster than a disk read. Any table or index that consumes > 10% of the data buffer, or tables & indexes that have > 50% of their blocks residing in the data buffer should be cached into the KEEP pool. You can fully automate this process using scripts.
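To find KEEP-pool candidates, the blocks each object currently holds in the buffer cache can be counted from V$BH; this is a sketch, and the cached_blocks figure would be compared against DBA_SEGMENTS.BLOCKS for the >50% test mentioned above:

```sql
-- Sketch: cached blocks per object, largest consumers first
SELECT o.owner, o.object_name, COUNT(*) cached_blocks
  FROM v$bh b, dba_objects o
 WHERE b.objd = o.data_object_id
 GROUP BY o.owner, o.object_name
 ORDER BY cached_blocks DESC;
```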
Consider setting your optimizer_index_caching parameter to assist the cost-based optimizer. Set the value of optimizer_index_caching to the average percentage of index segments in the data buffer at any time, which you can estimate from the v$bh view.
Your shared pool is set at 300MB, which is an unusually large value. Allocating excessive shared pool resources can adversely impact Oracle performance. For further details, see the shared pool advisory.