
cannot remove raid1 array (mdadm)

Dude! (Nov 7 2012, edited Nov 8 2012)
Hi,

The system is Oracle Linux 6.3 x86_64 running under VirtualBox. In order to remove the Linux RAID root volume, I have booted the system from the OL 6.3 UEK boot CD.

For some reason I'm unable to remove the /dev/md127 RAID device. Please see below:

<pre>
# mdadm -As
mdadm: /dev/md/vm009.example.com:1 has been started with 2 drives.

# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear]
md127 : active raid1 sda2[0] sdb2[1]
      20457404 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sda1[0] sdb1[1]
      511988 blocks super 1.0 [2/2] [UU]
</pre>

My first idea was to stop the device and remove it, which does not work:

<pre>
# mdadm --stop /dev/md127
mdadm: stopped /dev/md127

# mdadm --remove /dev/md127
mdadm: error opening /dev/md127: No such file or directory
</pre>
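I suppose the error after stopping simply means that the /dev/md127 node is gone once the array is stopped. Also, if I read the mdadm man page correctly, --remove in manage mode is meant for pulling a member device out of a running array rather than for deleting the array itself, so presumably it would be used more like this (just my understanding, and it would still leave the array itself in place):

<pre>
# mdadm /dev/md127 --fail /dev/sdb2
# mdadm /dev/md127 --remove /dev/sdb2
</pre>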

However, there is no error when I do not stop /dev/md127 first, but the device is not removed either:

<pre>
# mdadm -As
mdadm: /dev/md/vm009.example.com:1 has been started with 2 drives.

# mdadm --remove /dev/md127

# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear]
md127 : active raid1 sda2[0] sdb2[1]
      20457404 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sda1[0] sdb1[1]
      511988 blocks super 1.0 [2/2] [UU]
</pre>
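I assume the array keeps coming back because mdadm -As re-assembles it from the RAID superblocks on the partitions rather than from a config file. If that is the case, I guess the following would show what mdadm finds to assemble (just a guess on my part):

<pre>
# mdadm --examine --scan
# mdadm --detail --scan
</pre>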

I then thought about using --zero-superblock:

<pre>
# mdadm --stop /dev/md127
mdadm: stopped /dev/md127

# mdadm --zero-superblock /dev/sdb2

# mdadm -As
mdadm: /dev/md/vm009.example.com:1 has been started with 1 drive (out of 2).

# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear]
md127 : active raid1 sda2[0]
      20457404 blocks super 1.1 [2/1] [U_]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sda1[0] sdb1[1]
      511988 blocks super 1.0 [2/2] [UU]
</pre>
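To confirm that the superblock on /dev/sdb2 is really gone while the one on /dev/sda2 is still intact, I assume --examine is the right way to check:

<pre>
# mdadm --examine /dev/sdb2
# mdadm --examine /dev/sda2
</pre>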

However, if I do the same for the other drive, then I lose the whole enchilada.

<pre>
# mdadm --stop /dev/md127
mdadm: stopped /dev/md127

# mdadm --zero-superblock /dev/sda2

# mdadm -As
mdadm: No arrays found in config file or automatically
</pre>
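If I understand this correctly, at that point there is no RAID metadata left on either partition and nothing remains for mdadm to assemble. To double-check that no RAID signature is left behind, I assume something like this would tell me:

<pre>
# mdadm --examine /dev/sda2 /dev/sdb2
# blkid /dev/sda2 /dev/sdb2
</pre>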

Here is the detail output for the /dev/md127 device:

<pre>
# mdadm --detail /dev/md127
/dev/md127:
        Version : 1.1
  Creation Time : Wed Nov  7 01:44:42 2012
     Raid Level : raid1
     Array Size : 20457404 (19.51 GiB 20.95 GB)
  Used Dev Size : 20457404 (19.51 GiB 20.95 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Nov  8 01:23:00 2012
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : vm009.example.com:1
           UUID : 006a3ad6:c0590b94:5fddf632:5704f209
         Events : 70

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
</pre>
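Given the Name and UUID shown above, I also wondered whether an ARRAY entry in /etc/mdadm.conf could be the reason the array gets re-assembled every time (I am not sure the file even exists in the boot CD environment). I assume an entry referencing this UUID would show up with something like:

<pre>
# grep -i 006a3ad6 /etc/mdadm.conf
</pre>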

Any ideas what I might be doing wrong? Thanks!