Hi Champs
I'm having trouble adding a new ASM disk on a 2-node RAC. After successfully adding a 100G disk on the first node, the scandisks run on the other node unexpectedly marked two existing disks as stale and removed them, which in turn caused database block corruption. See the output below (a rough sketch of the sequence I followed comes right after it):
[root@eul2563 mapper]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Cleaning disk "DATA_DISK10"
Cleaning disk "DATA_DISK9"
Scanning system for ASM disks...
Instantiating disk "DATA_DISK39"
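For reference, a rough sketch of the disk-addition sequence I followed (the exact commands are in the attached "Sequence of Commands"; the multipath device name and the diskgroup name below are placeholders, not the real ones):

# Node 1: refresh the multipath/partition maps for the new 100G LUN
multipath -ll                                   # confirm the new LUN and its paths are visible
kpartx -a /dev/mapper/mpath_new                 # "mpath_new" is a placeholder device name
# Label the partition as an ASMLib disk (name taken from the scandisks output above)
/usr/sbin/oracleasm createdisk DATA_DISK39 /dev/mapper/mpath_newp1
# From the ASM instance, add the disk to the diskgroup (diskgroup name "DATA" is assumed)
# SQL> ALTER DISKGROUP DATA ADD DISK 'ORCL:DATA_DISK39';

# Node 2: pick up the new ASMLib label (this is the scandisks run shown above)
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks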
As a workaround I restarted the multipath service on this node and the disks were added back into ASM automatically; I have since repaired the block corruption as well (the workaround steps are sketched below). Now I need to find the RCA for "How did a running system mark disks as stale?" so that the same issue doesn't repeat on production systems.
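Roughly, the workaround on node 2 looked like the below (the multipathd restart syntax depends on the OS release; the disk names are the ones from the output above):

# Restart the multipath service so the device maps get rebuilt
service multipathd restart                      # or: systemctl restart multipathd on systemd-based releases
multipath -ll                                   # confirm the paths for the affected LUNs are back
# Rescan and verify the ASMLib labels are visible again
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks | grep 'DATA_DISK'
/usr/sbin/oracleasm querydisk DATA_DISK9
/usr/sbin/oracleasm querydisk DATA_DISK10
# From the ASM instance, confirm the disks are mounted/online again
# SQL> SELECT name, path, header_status, mode_status FROM v$asm_disk;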
Utilities: kpartx, multipath, oracleasm
Attachments: Sequence of Commands, ASM Alert Log Node 1, ASM Alert Log Node 2, Syslog Node 1, Syslog Node 2
Need your expertise.