SAN disks do not auto-failover; must run a metaset take first
Dec 10 2008 — edited Dec 11 2008

Hello,
I am running Solaris 10 with Sun Cluster 3.2: a two-node cluster with a local zone.
The SAN disks do not auto-failover when I do a switch on the resource group. Instead, to bring it online I must:
- clrg online — fails
- manually mount the SAN disks on the node
- clrg online — succeeds

To fail over to the other node, I must:
- clrg switch [2nd node] — fails with these messages:
# Dec 10 11:03:35 pluto SC[SUNW.HAStoragePlus:6,ftp4-com-rg,ftp4-data-has,hastorageplus_prenet_start]: Mount of /zones/ftp_zone/root/ftp_user_data failed: (33) mount: /zones/ftp_zone/root/ftp_user_data: Device busy
Dec 10 11:03:35 pluto SC[SUNW.HAStoragePlus:6,ftp4-com-rg,ftp4-data-has,hastorageplus_prenet_start]: Failed to mount: /zones/ftp_zone/root/ftp_user_data in global or local zone
- so now the RG is offline
- [on 2nd node] metaset -s [setname] -t, so the 2nd node becomes the owner
- [on 2nd node] manually mount the disks
- clrg switch to the 2nd node — succeeds
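To make the workaround concrete, here is the manual sequence sketched as shell commands, using the set, resource-group, and node names from this post (this reproduces the steps described above, it is not the fix):

```shell
# On the 2nd node (pluto), after the clrg switch has failed:

# 1. Take ownership of the disksets from jupiter:
metaset -s ftpbin -t
metaset -s ftpdata -t

# 2. Manually mount the filesystems listed in vfstab:
mount /zones/ftp_zone/root/mysql_data
mount /zones/ftp_zone/root/ftp_user_data

# 3. Now the switch to pluto succeeds:
clrg switch -n pluto ftp4-com-rg
```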
So it appears the RG is not failing over because the metasets are owned by one node, which does not release them so that the other node can take ownership. Do I need to do a metaset -r (release), or a metaset -A enable?
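For reference, a manual release on the current owner would look like the sketch below (run on jupiter). Note that when HAStoragePlus manages the disksets as cluster device groups, the cluster framework is supposed to perform this release/take itself during a switch:

```shell
# On jupiter (the current owner), release the disksets so pluto can take them:
metaset -s ftpbin -r
metaset -s ftpdata -r

# Then check device-group ownership as the cluster sees it:
cldg status
```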
vfstab has been updated:
/dev/md/ftpbin/dsk/d110 /dev/md/ftpbin/rdsk/d110 /zones/ftp_zone/root/mysql_data ufs 1 no logging
/dev/md/ftpdata/dsk/d111 /dev/md/ftpdata/rdsk/d111 /zones/ftp_zone/root/ftp_user_data ufs 1 no logging
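For comparison, this is roughly how an HAStoragePlus resource is normally tied to those mount points so that the cluster takes the diskset itself during failover. The resource and group names below are taken from the log messages above; treat this as a sketch against your actual configuration, not a verbatim fix:

```shell
# Register the resource type (once per cluster):
clrt register SUNW.HAStoragePlus

# Create the storage resource. FilesystemMountPoints must match the vfstab
# entries on BOTH nodes, and the vfstab "mount at boot" field must be "no":
clrs create -g ftp4-com-rg -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/zones/ftp_zone/root/mysql_data,/zones/ftp_zone/root/ftp_user_data \
    ftp4-data-has
```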
jupiter /etc # metaset

Set name = ftpbin, Set number = 1

Host                Owner
  jupiter            Yes
  pluto

Driv Dbase
  d2   Yes

Set name = ftpdata, Set number = 2

Host                Owner
  jupiter            Yes
  pluto

Driv Dbase
  d3   Yes
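Given that output, it may also be worth checking whether ftpbin and ftpdata are actually registered as cluster device groups; if a diskset is not known to the cluster framework, HAStoragePlus cannot order the take during a switch. A quick check (an assumption to verify, not a diagnosis):

```shell
# List the device groups the cluster knows about -- ftpbin and ftpdata
# should both appear here:
cldg list -v

# Show status and current primary from the cluster's point of view:
cldg status ftpbin ftpdata
```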
Edited by: onetree on Dec 10, 2008 10:02 PM