Dropping DB Blocks/OS Packets Using dNFS
Hello.
We have a 6-node 11g RAC database running on Red Hat Linux 4.6 (64-bit). We are using a NetApp device for DB storage. We have had dNFS enabled since day 1. Using 'ifconfig', we see packet drops on our eth2 & eth3 NICs, which are tied into our dNFS configuration. We even see packets dropped on our interconnects. I realize that the interconnect NICs have no relation to the data filer NICs. However, we have seen the following packet drops:
eth1 (interconnect) - 59K dropped out of 11.5M Rx packets
eth2 (filer dNFS connect #1) - 215K packets dropped out of 41M packets
eth3 (filer dNFS connect #2) - 0 dropped out of 17K Rx packets
Is it OK to see OS packets dropped like this? I cannot determine which of these packets are DB-related and which are not. It also seems strange that eth3 is getting such a low volume of I/O traffic (and, hence, is not dropping any packets).
We also get the following 'DB-centric' numbers when querying gv$sysstat; you can see the 'gc blocks lost' counts coming through intermittently in the single digits (note: no blocks are showing up as corrupted):
Node   GC Activity      # Occurrences
-----  ---------------  -------------
1      gc blocks lost   0
2      gc blocks lost   0
3      gc blocks lost   5
4      gc blocks lost   0
5      gc blocks lost   1
6      gc blocks lost   2
These numbers have been larger, but are down at the moment. While I did not include it in the output, the 'gc current blocks received' value for one of the nodes (node #6) is 181K. Maybe I need to look at a different statistic in this view?
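For reference, the numbers above come from a query along these lines (a rough sketch from memory; the exact statistic names I included may have differed slightly, so treat the list below as an approximation):

-- per-instance global cache counters from gv$sysstat (11g statistic names)
SELECT inst_id AS node,
       name    AS gc_activity,
       value   AS occurrences
  FROM gv$sysstat
 WHERE name IN ('gc blocks lost',
                'gc blocks corrupt',
                'gc current blocks received',
                'gc cr blocks received')
 ORDER BY inst_id, name;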
The SAs believe that Oracle's dNFS is the problem, so I disabled it a few days ago (i.e., removed the oranfstab from $ORACLE_HOME/dbs), and we are still seeing OS packets lost, "global cache blocks lost" messages, and the 'gc blocks lost' values per the output above. I am assuming I don't have to bounce the DB for dNFS to be disabled, as the documentation doesn't state to do so.
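To double-check whether dNFS is actually out of the picture after removing oranfstab (my assumption being that the gv$dnfs_* views are only populated while an instance is doing I/O through dNFS), I was planning to run something like:

-- if these return no rows on any instance, dNFS should not be serving I/O
SELECT inst_id, svrname, dirname FROM gv$dnfs_servers;
SELECT inst_id, filename FROM gv$dnfs_files;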
Any help is appreciated. Thanks.
Matt