Hi all
I have a Solaris 10 system with a faulty disk.
When I analyze the explorer output manually, I can clearly see the problem:
bash-3.2# more ./messages/messages | grep c0t1d0
Oct 21 04:19:54 hostname [kern.warning] WARNING: md: d8: write error on /dev/dsk/c0t1d0s3
Oct 21 04:20:55 hostname [kern.warning] WARNING: md: d8: /dev/dsk/c0t1d0s3 needs maintenance
Oct 21 04:25:45 hostname [kern.warning] WARNING: md: d17: write error on /dev/dsk/c0t1d0s6
Oct 21 04:25:45 hostname [kern.warning] WARNING: md: d17: /dev/dsk/c0t1d0s6 needs maintenance
Oct 21 04:33:47 hostname [kern.warning] WARNING: md: d2: write error on /dev/dsk/c0t1d0s0
Oct 21 04:33:47 hostname [kern.warning] WARNING: md: d2: /dev/dsk/c0t1d0s0 needs maintenance
Oct 21 04:46:04 hostname [kern.warning] WARNING: md: d11: write error on /dev/dsk/c0t1d0s4
Oct 21 04:46:22 hostname [kern.warning] WARNING: md: d11: write error on /dev/dsk/c0t1d0s4
Oct 21 04:46:22 hostname [kern.warning] WARNING: md: d11: /dev/dsk/c0t1d0s4 needs maintenance
Oct 21 04:46:53 hostname [kern.warning] WARNING: md: d14: /dev/dsk/c0t1d0s5 needs maintenance
Oct 21 04:49:42 hostname [kern.warning] WARNING: md: d5: /dev/dsk/c0t1d0s1 needs maintenance
Oct 21 04:54:42 hostname [kern.warning] WARNING: md d12: open error on /dev/dsk/c0t1d0s5
Oct 21 04:54:42 hostname [kern.warning] WARNING: md d3: open error on /dev/dsk/c0t1d0
bash-3.2# more ./messages/dmesg.out | grep c0t1d0
Oct 21 04:49:42 hostname [kern.warning] WARNING: md: d5: /dev/dsk/c0t1d0s1 needs maintenance
Oct 21 04:54:42 hostname [kern.warning] WARNING: md d12: open error on /dev/dsk/c0t1d0s5
Oct 21 04:54:42 hostname [kern.warning] WARNING: md d3: open error on /dev/dsk/c0t1d0s1
bash-3.2# more ./sysconfig/iostat-En.out | grep c0t1d0
c0t1d0 Soft Errors: 5 Hard Errors: 193 Transport Errors: 18
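A quick way to scan the whole iostat-En.out for every disk reporting hard or transport errors would be something like this (a sketch, assuming the usual one summary line per device):

awk '/Soft Errors:/ && ($7 > 0 || $10 > 0) {print $1, "hard:", $7, "transport:", $10}' ./sysconfig/iostat-En.out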
bash-3.2# more ./disks/svm/metastat.out | grep c0t1d0
Invoke: metareplace d15 c0t1d0s6 <new device>
c0t1d0s6 0 No Maintenance Yes
Invoke: metareplace d9 c0t1d0s4 <new device>
c0t1d0s4 0 No Maintenance Yes
Invoke: metareplace d6 c0t1d0s3 <new device>
c0t1d0s3 0 No Maintenance Yes
Invoke: metareplace d3 c0t1d0s1 <new device>
c0t1d0s1 0 No Maintenance Yes
Invoke: metareplace d0 c0t1d0s0 <new device>
c0t1d0s0 0 No Maintenance Yes
Invoke: metareplace d12 c0t1d0s5 <new device>
c0t1d0s5 0 No Maintenance Yes
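The recovery itself is not my question; going by the Invoke hints from metastat above, it would be roughly the following once a replacement disk is in the same slot (a sketch only; c0t0d0 as the healthy mirror half is my assumption):

# copy the label from the healthy half of the mirror (c0t0d0 assumed):
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
# re-enable the failed components in place:
metareplace -e d0 c0t1d0s0
metareplace -e d3 c0t1d0s1
metareplace -e d6 c0t1d0s3
metareplace -e d9 c0t1d0s4
metareplace -e d12 c0t1d0s5
metareplace -e d15 c0t1d0s6
# plus remove/re-add any metadb replicas that were on c0t1d0 (metadb -d / metadb -a)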
bash-3.2# more ./disks/diskinfo | egrep "(Serial|c0t1d0)"
Location Vendor Product Rev Serial # Dual Port
c0t1d0 SEAGATE ST914603SSUN146G 0868 0123456789 primary
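For reference, the manual checks above could be wrapped into a small script run from the top of the unpacked explorer (rough sketch; the paths are assumed to match this bundle's layout):

#!/bin/sh
# usage: ./diskcheck.sh c0t1d0
DISK=$1
echo "== syslog / dmesg =="
grep "$DISK" ./messages/messages ./messages/dmesg.out
echo "== iostat -En error counters =="
grep "$DISK" ./sysconfig/iostat-En.out
echo "== SVM state =="
grep "$DISK" ./disks/svm/metastat.out
echo "== disk identity =="
egrep "(Serial|$DISK)" ./disks/diskinfo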
However, when I analyze the auto-generated report from the ORAS/CLI tool, I cannot find the faulty disk there; I only get information about metadb failures.
Is there any way to get more information about this issue from the ORAS/CLI report (e.g. disk name, HW path, etc.)?
================================================================================
= Failed Components =
================================================================================
SDS/SVM Status Details (metadb failures)
------------------------------------------
ERROR                         BLOCK    FIRST
FOUND    DEVICE      FLAGS    COUNT    BLOCK    MASTER
======== =========== ======== ======== ======== ========
Thank you & best regards,
m4rco