ACFS volume shows up as disabled after reboot or crs stop / start.
Hello,
I'm running a two-node RAC cluster on OEL6 using Enterprise Edition 11.2.0.3. All my PSUs are up to date.
I created the ACFS filesystem following the steps carefully (or so I thought), only to have it vanish after a reboot. Support had me reinstall the ACFS libraries, which brought me up to the point where the underlying volume was in a DISABLED state. After enabling the volume and mounting the filesystem, all was good. I then tried shutting down the cluster stack on one node with 'crsctl stop crs' and starting it again. The volume once more came up DISABLED:
ASMCMD> volinfo -a
Diskgroup Name: FS_ORCL_CFS0195
Volume Name: INSRACD_VG
Volume Device: /dev/asm/insracd_vg-107
State: DISABLED
Size (MB): 5120000
Resize Unit (MB): 32
Redundancy: UNPROT
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: /insracd
oracle@lmmk87:/fisc/oracle> /sbin/acfsutil registry
Mount Object:
Device: /dev/asm/insracd_vg-107
Mount Point: /insracd
Disk Group: FS_ORCL_CFS0195
Volume: INSRACD_VG
Options: none
Nodes: lmmk87,lmmk88
oracle@lmmk87:/fisc/oracle>
Getting it back required an umount of the filesystem, an ENABLE from ASMCMD, and then a mount.
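For reference, here is the manual recovery sequence I described, as a sketch using the names from the output above (the umount/mount steps run as root, the asmcmd step as the Grid Infrastructure owner):

```shell
# 1. Unmount the stale ACFS mount point (repeat on each node it is mounted on)
/bin/umount /insracd

# 2. Re-enable the ADVM volume from ASMCMD (as the grid owner)
asmcmd volenable -G FS_ORCL_CFS0195 INSRACD_VG

# 3. Mount the ACFS filesystem again (as root)
/bin/mount -t acfs /dev/asm/insracd_vg-107 /insracd
```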
How can I automate this so that the volume comes back enabled and the filesystem mounts without intervention? I thought the fact that I registered them would take care of it ...
/sbin/acfsutil registry -a -f -n lmmk87,lmmk88 /dev/asm/insracd_vg-107 /insracd
What would take care of this situation? I looked into using 'srvctl add filesystem', but that seems geared more toward using ACFS for a shared ORACLE_HOME and creating a CRS dependency -- we're not doing that. I'm appreciative of this forum and hope I've targeted this question to the correct place. I'll try to add some ASM log information on this.
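In case it helps others reading this, a sketch of what the srvctl route would look like with the names from this post (11.2 syntax; the -u user is an assumption about which OS account should mount it). This registers the filesystem as a CRS resource so the stack enables the volume and mounts it on startup, even though we aren't using it for a shared ORACLE_HOME:

```shell
# Register the ACFS filesystem as a CRS-managed resource (run as root)
srvctl add filesystem -d /dev/asm/insracd_vg-107 -v INSRACD_VG \
    -g FS_ORCL_CFS0195 -m /insracd -u oracle

# Start it and verify on both nodes
srvctl start filesystem -d /dev/asm/insracd_vg-107
srvctl status filesystem -d /dev/asm/insracd_vg-107
```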
Thanks,
Malcolm.