Hi, I'm hitting a problem while installing Oracle Grid Infrastructure.
The error log is below:
INFO: Checking OCR integrity...
INFO: Checking the absence of a non-clustered configuration...
INFO: All nodes free of non-clustered, local-only configurations
INFO: ERROR:
INFO: PRVF-4193 : Asm is not running on the following nodes. Proceeding with the remaining nodes.
INFO: Check failed on nodes:
INFO: rac2
INFO: Checking OCR config file "/etc/oracle/ocr.loc"...
INFO: OCR config file "/etc/oracle/ocr.loc" check successful
INFO: ERROR:
INFO: PRVF-4195 : Disk group for ocr location "+OCDCR" not available on the following nodes:
INFO: Check failed on nodes:
INFO: rac2
INFO: Checking size of the OCR location "+OCDCR" ...
INFO: Size check for OCR location "+OCDCR" successful...
INFO: OCR integrity check failed
INFO: Checking CRS integrity...
INFO: ERROR:
INFO: PRVF-5316 : Failed to retrieve version of CRS installed on node "rac2"
INFO: CRS integrity check failed
INFO: Checking node application existence...
INFO: Checking existence of VIP node application (required)
INFO: Check failed.
INFO: Check failed on nodes:
INFO: rac2,rac1
INFO: Checking existence of ONS node application (optional)
INFO: Check ignored.
INFO: Check failed on nodes:
INFO: rac2
INFO: Checking existence of GSD node application (optional)
INFO: Check ignored.
INFO: Check failed on nodes:
INFO: rac2
INFO: Checking existence of EONS node application (optional)
INFO: Check ignored.
INFO: Check failed on nodes:
INFO: rac2
INFO: Checking existence of NETWORK node application (optional)
INFO: Check ignored.
INFO: Check failed on nodes:
INFO: rac2
INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking name resolution setup for "scanip"...
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scanip"
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "scanip" (IP address: 192.168.192.13) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scanip"
INFO: Verification of SCAN VIP and Listener setup failed
INFO: OCR detected on ASM. Running ACFS Integrity checks...
INFO: Starting check to see if ASM is running on all cluster nodes...
INFO: PRVF-5137 : Failure while checking ASM status on node "rac2"
INFO: Starting Disk Groups check to see if at least one Disk Group configured...
INFO: Disk Group Check passed. At least one Disk Group configured
INFO: Task ACFS Integrity check failed
INFO: Checking Oracle Cluster Voting Disk configuration...
INFO: Oracle Cluster Voting Disk configuration check passed
INFO: User "grid" is not part of "root" group. Check passed
INFO: Checking if Clusterware is installed on all nodes...
INFO: Check of Clusterware install passed
INFO: Checking if CTSS Resource is running on all nodes...
INFO: CTSS resource check passed
INFO: Querying CTSS for time offset on all nodes...
INFO: Query of CTSS for time offset passed
INFO: Check CTSS state started...
INFO: CTSS is in Observer state. Switching over to clock synchronization checks using NTP
INFO: Starting Clock synchronization checks using Network Time Protocol(NTP)...
INFO: NTP Configuration file check started...
INFO: NTP Configuration file check passed
INFO: Checking daemon liveness...
INFO: Liveness check passed for "ntpd"
INFO: NTP daemon slewing option check passed
INFO: NTP daemon's boot time configuration check for slewing option passed
INFO: NTP common Time Server Check started...
INFO: PRVF-5408 : NTP Time Server ".LOCL." is common only to the following nodes "rac1"
INFO: PRVF-5416 : Query of NTP daemon failed on all nodes
INFO: Clock synchronization check using Network Time Protocol(NTP) passed
INFO: Oracle Cluster Time Synchronization Services check passed
INFO: Post-check for cluster services setup was unsuccessful on all the nodes.
INFO:
WARNING:
INFO: Completed Plugin named: Oracle Cluster Verification Utility
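Most of the failures above (PRVF-4193, PRVF-4195, PRVF-5316, PRVF-5137) seem to point at one root cause: the clusterware stack, including ASM, is not up on rac2, so the "+OCDCR" disk group is unreachable from that node. A quick check I could run on rac2 (a sketch — the Grid home path is an assumption, adjust it to the actual install location):

```shell
# Check whether the clusterware stack is running on rac2 (run there as grid/root).
# GRID_HOME below is an assumed default install path, not confirmed.
GRID_HOME=${GRID_HOME:-/u01/app/11.2.0/grid}
if [ -x "$GRID_HOME/bin/crsctl" ]; then
    "$GRID_HOME/bin/crsctl" check crs          # are CRS/CSS/EVM daemons up?
    "$GRID_HOME/bin/crsctl" stat res -t -init  # state of ora.asm in particular
else
    echo "crsctl not found under $GRID_HOME -- check the Grid home path"
fi
```

If `crsctl check crs` fails there, the OCR, ASM, and disk-group errors against rac2 are all downstream of that.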
I use rac1 as the NTP server, so I think NTP is set up correctly — the cluvfy pre-check also passes:
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Pre-check for cluster services setup was successful.
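Even so, the post-check reported PRVF-5408/PRVF-5416, which as far as I understand means ntpq on rac2 could not query a time server in common with rac1. With rac1 serving time, I'd expect rac2's /etc/ntp.conf to contain something like this (illustrative fragment, not my actual file):

```
# /etc/ntp.conf on rac2 (sketch -- assumes rac1 is the time source)
server rac1 prefer
# Oracle also expects the slewing option, e.g. in /etc/sysconfig/ntpd:
# OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
```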
Here is my /etc/hosts:
[grid@rac1 grid]$ cat /etc/hosts
127.0.0.1 localhost
#::1 localhost
#public eth0
192.168.192.169 rac1
192.168.192.170 rac2
#private eth1
10.0.0.1 rac1-priv
10.0.0.2 rac2-priv
#virtual ip
192.168.192.11 rac1-vip
192.168.192.12 rac2-vip
#scanip
192.168.192.13 scanip
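About PRVF-4664: I believe it means "scanip" does not resolve the same way on every node — for example, the entry exists in rac1's /etc/hosts but is missing or different on rac2, or the name also resolves through DNS. One way to compare the two hosts files side by side (a sketch — the sample entries and /tmp paths are illustrative; in practice I would scp rac2's file over first):

```shell
# Compare how "scanip" is defined in each node's /etc/hosts.
# The two heredoc samples below stand in for the real files (assumption:
# same single-entry format as the /etc/hosts shown above).
cat > /tmp/hosts.rac1 <<'EOF'
192.168.192.13 scanip
EOF
cat > /tmp/hosts.rac2 <<'EOF'
192.168.192.13 scanip
EOF
# Normalize each file to "IP name" for the scanip line, then diff;
# diff is silent and exits 0 when both nodes agree.
awk '$2 == "scanip" {print $1, $2}' /tmp/hosts.rac1 > /tmp/scan.rac1
awk '$2 == "scanip" {print $1, $2}' /tmp/hosts.rac2 > /tmp/scan.rac2
diff /tmp/scan.rac1 /tmp/scan.rac2 && echo "scanip entries match"
```

If the entries differ, or if `nslookup scanip` returns an address on one node but not the other, that inconsistency is exactly what PRVF-4664 complains about.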