
multipath does not persist across reboots

3502190 · May 22 2018 — edited May 23 2018

Hello

We have an issue with a backup server.

We have created two multipath devices, mpathbb and mpathbc,

and we have created a RAID 1 array on them using mdadm like this:

mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/mapper/mpathbbp1 /dev/mapper/mpathbcp1
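For reference, this is roughly how we would expect the array to be pinned to the multipath maps in /etc/mdadm.conf (Oracle Linux path assumed). The UUID and name are the ones reported by mdadm --detail further down; the DEVICE restriction is only our understanding of the recommended setup, not something we currently have in place:

# /etc/mdadm.conf (sketch)
# restrict assembly to the multipath maps so md never grabs the raw sd* paths
DEVICE /dev/mapper/mpath*p1
ARRAY /dev/md1 metadata=1.2 name=endor:1 UUID=23999c17:96087b94:0d04bed5:43af255e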

Everything was fine and the RAID was healthy,

but we had to reboot the server, and since the reboot all the multipath devices are gone and the md device shows as faulty.

mdadm --detail /dev/md1

/dev/md1:

        Version : 1.2

  Creation Time : Wed May 16 11:33:11 2018

     Raid Level : raid1

     Array Size : 418211584 (398.84 GiB 428.25 GB)

  Used Dev Size : 418211584 (398.84 GiB 428.25 GB)

   Raid Devices : 2

  Total Devices : 2

    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue May 22 11:52:06 2018

          State : active, degraded

Active Devices : 1

Working Devices : 1

Failed Devices : 1

  Spare Devices : 0

           Name : endor:1  (local to host endor)

           UUID : 23999c17:96087b94:0d04bed5:43af255e

         Events : 120363

    Number   Major   Minor   RaidDevice State

       0       0        0        0      removed

       1       8      241        1      active sync   /dev/sdp1

       0      65       49        -      faulty   /dev/sdt1
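Once the multipath maps are back, our plan would be to re-add the failed member through the map rather than through the raw path. A sketch of what we have in mind, untested so far (which of mpathbbp1/mpathbcp1 actually sits behind sdt1 would need checking with multipath -ll first):

mdadm /dev/md1 --remove /dev/sdt1              # drop the member currently marked faulty
mdadm /dev/md1 --add /dev/mapper/mpathbcp1     # re-add it via the multipath map so it resyncs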

Trying to rediscover the multipaths gives these errors:

multipath -r -v 2

May 22 12:15:33 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled

May 22 12:15:33 | mpathbc: ignoring map

May 22 12:15:33 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled

May 22 12:15:33 | mpathbc: ignoring map

May 22 12:15:33 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled

May 22 12:15:33 | mpathbc: ignoring map

May 22 12:15:33 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled

May 22 12:15:34 | mpathbc: ignoring map

May 22 12:15:34 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled

May 22 12:15:34 | mpathbb: ignoring map

May 22 12:15:34 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled

May 22 12:15:34 | mpathbb: ignoring map

May 22 12:15:34 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled

May 22 12:15:34 | mpathbb: ignoring map

May 22 12:15:34 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled

May 22 12:15:34 | mpathbb: ignoring map
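We are not sure why the maps are being ignored. These are the checks we know of for the daemon state and the WWIDs it will claim (standard tools on an OL/RHEL-style install; systemd assumed, on OL6 the service/chkconfig equivalents would apply):

systemctl status multipathd       # is the daemon actually running?
systemctl enable multipathd       # make sure it is started at boot
multipath -ll                     # list the maps and the state of each path
cat /etc/multipath/wwids          # WWIDs that multipath is allowed to claim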

In dmesg we see the following:

device-mapper: table: 252:15: multipath: error getting device

device-mapper: ioctl: error adding target to table

device-mapper: table: 252:15: multipath: error getting device

device-mapper: ioctl: error adding target to table

device-mapper: table: 252:15: multipath: error getting device

device-mapper: ioctl: error adding target to table

device-mapper: table: 252:15: multipath: error getting device

device-mapper: ioctl: error adding target to table
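As far as we understand, "error getting device" from device-mapper generally means the table load could not open one of the underlying path devices, typically because it is missing or already held exclusively by something else (for example, md having assembled directly from the raw /dev/sd* partitions at boot). A quick way to check what currently owns the paths, using stock tools:

cat /proc/mdstat                                  # did md assemble from /dev/sd* instead of the maps?
lsblk -o NAME,TYPE,MOUNTPOINT /dev/sdp /dev/sdt   # what sits on top of each path device
dmsetup table | grep multipath                    # which multipath maps device-mapper actually built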

Any idea how to recover the multipath devices, fix the RAID (which appears faulty but really isn't), and prevent this from happening across reboots?

Thanks
