CRS-1019 Resource error, change application host?
Jan 2 2007 (edited Jan 3 2007)
I'm new to the RAC environment and have inherited a Windows setup with 2 servers (rac1 & rac2), 1 database (leads), and 1 instance on each server (leads1 & leads2). When I try to start the offline applications shown in crs_stat, I get the error message: CRS-1019 Resource ora.rac2.ons (application) cannot run on rac1
I can see that "ora.rac2.ons" should have a host of rac2 (not rac1). Is there any way to change the host for an application name without having to remove the second node and re-add it? This is a "sandbox" system, so I have the freedom to change the config, but I don't want to mess up the config so badly that I have to start over right now. At some point, when I understand more, I will take that on as an exercise, but for now I am just trying to learn.
C:\>crs_stat -t
Name           Type         Target    State     Host
------------------------------------------------------
ora.leads.db   application  ONLINE    ONLINE    rac1
ora....s1.inst application  ONLINE    ONLINE    rac1
ora....s2.inst application  ONLINE    OFFLINE
ora....SM1.asm application  ONLINE    ONLINE    rac1
ora....C1.lsnr application  ONLINE    ONLINE    rac1
ora.rac1.gsd   application  ONLINE    ONLINE    rac1
ora.rac1.ons   application  ONLINE    ONLINE    rac1
ora.rac1.vip   application  ONLINE    ONLINE    rac1
ora....SM2.asm application  ONLINE    OFFLINE
ora....C2.lsnr application  ONLINE    OFFLINE
ora....C2.lsnr application  ONLINE    OFFLINE
ora.rac2.gsd   application  ONLINE    OFFLINE
ora.rac2.ons   application  ONLINE    OFFLINE
ora.rac2.vip   application  ONLINE    ONLINE    rac1
C:\>crs_start ora.rac2.ons
rac1 : CRS-1019: Resource ora.rac2.ons (application) cannot run on rac1
CRS-0223: Resource ora.rac2.ons has placement error.
I get the same error for gsd, asm, etc.
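For reference, I believe the placement attributes of a resource (HOSTING_MEMBERS, PLACEMENT, and so on) can be dumped from its profile with crs_stat -p, so I should at least be able to confirm which host ora.rac2.ons is tied to. That is just my reading of the crs_stat usage:

C:\>crs_stat -p ora.rac2.ons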
I also get the following error from srvctl:
C:\>srvctl enable asm -n rac2 -i leads2
PRKS-1035 : Failed to retrieve ORACLE_HOME value for ASM instance "leads2" on node "rac2" from cluster registry, [PRKS-1028 : Configuration for ASM instance "leads2" on node "rac2" does not exist in cluster registry.]
[PRKS-1028 : Configuration for ASM instance "leads2" on node "rac2" does not exist in cluster registry.]
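Since PRKS-1028 says the ASM configuration for rac2 is not in the cluster registry, I assume I could check what ASM configuration (if any) is actually registered for that node with something like the command below. I am also not sure whether -i in my enable command should have been the database instance name (leads2) or the ASM instance name that crs_stat hints at (something like +ASM2, which is just my guess):

C:\>srvctl config asm -n rac2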
And the following error from cluvfy:
D:\oracle\product\10.1.0\db_1\CVU\bin>cluvfy comp ssa -n all -verbose
Verifying shared storage accessibility
WARNING:
User equivalence is not set for nodes:
rac2
Verification will proceed with nodes:
rac1
Checking shared storage accessibility...
Disk Partition                        Sharing Nodes (1 in count)
------------------------------------  ------------------------
\Device\Harddisk0\Partition1          rac1
\Device\Harddisk1\Partition1          rac1
\Device\Harddisk2\Partition1          rac1
\Device\Harddisk3\Partition1          rac1
\Device\Harddisk4\Partition1          rac1
\Device\Harddisk5\Partition1          rac1
\Device\Harddisk6\Partition1          rac1
\Device\Harddisk7\Partition1          rac1
\Device\Harddisk8\Partition1          rac1
\Device\Harddisk9\Partition1          rac1
\Device\Harddisk10\Partition1         rac1
\Device\Harddisk11\Partition1         rac1
OCRCFG                                rac1
VOTEDSK                               rac1
Shared storage check was successful on nodes "rac1".
Verification of shared storage accessibility was unsuccessful.
Checks did not pass for the following node(s):
rac2
*************************************************************************
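Regarding the "User equivalence is not set for nodes: rac2" warning, I believe cluvfy can check administrative privileges and user equivalence on its own with the admprv component. This is my guess at the right options, run from the same CVU\bin directory under the account that owns the Oracle installation:

D:\oracle\product\10.1.0\db_1\CVU\bin>cluvfy comp admprv -n all -o user_equiv -verbose

I mention it only because the shared storage check also skipped rac2, so that failure may just be a side effect of the user equivalence problem.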
Perhaps the first error about the resource host is related to these last two errors.
Which error should I try to tackle first?
Thanks for your help. I've painfully searched the RAC admin, Clusterware/RAC admin, deployment, installation, and best practices documents, but I seem to be going in circles on these three errors.