Existing ASM Disk Reports Stale while Scanning on Node 2

Tech Geek, Jul 14 2017 (edited Aug 31 2017)

Hi Champs

I'm having trouble adding a new ASM disk on a 2-node RAC. After successfully adding a 100G disk on the first node, running a scan on the second node marked two existing disks as stale and removed them unexpectedly, which in turn caused database block corruption. See the output below:

[root@eul2563 mapper]# /usr/sbin/oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks...

Cleaning disk "DATA_DISK10"

Cleaning disk "DATA_DISK9"

Scanning system for ASM disks...

Instantiating disk "DATA_DISK39"
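For context, scandisks "cleans" a disk when it can no longer find any scanned device carrying that ASMLib label. A diagnostic sketch I'd run before and after a scan to trace which block device each label resolves to (DATA_DISK10 is just one of the affected labels):

```shell
# Sketch: capture the label-to-device mapping around a scandisks run,
# so a "Cleaning disk" event can be traced back to a missing device path.
/usr/sbin/oracleasm listdisks                 # all labels ASMLib currently knows
/usr/sbin/oracleasm querydisk -d DATA_DISK10  # backing device of one affected label
ls -l /dev/oracleasm/disks/                   # device nodes ASMLib has instantiated
multipath -ll                                 # dm-multipath maps and path states
```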

As a workaround, I restarted the multipath service on this node and the disks were added back into ASM automatically. I have since repaired the block corruption as well. Now I need to find the RCA ("how did a running system mark these disks stale?") so the same issue won't repeat on production systems.
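For completeness, the workaround as a command sequence (the service name is from RHEL 6; adjust for your platform):

```shell
# Workaround used on node 2 (sketch; multipathd is the assumed service name)
service multipathd restart        # rebuild the device-mapper multipath maps
/usr/sbin/oracleasm scandisks     # re-instantiate the ASMLib disks
/usr/sbin/oracleasm listdisks     # verify DATA_DISK9 and DATA_DISK10 are back
```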

Utilities: kpartx, multipath, oracleasm
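One configuration worth checking for the RCA (an assumption on my part, not confirmed from the logs): if ASMLib scans the single /dev/sd* paths before the multipath maps, a scan that runs while multipath is reconfiguring can miss the labels entirely. The scan order is controlled in /etc/sysconfig/oracleasm:

```shell
# /etc/sysconfig/oracleasm (excerpt; illustrative values)
# Scan device-mapper multipath devices first and skip the raw single paths.
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
```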

Attachments: Sequence of Commands, ASM Alert Log Node 1, ASM Alert Log Node 2, Syslog Node 1, Syslog Node 2

Need your expertise.

This post has been answered by Mohammad Ikram on Aug 31 2017
Locked due to inactivity on Sep 28 2017