[Long post] What is causing the high %w and %b for the d0 device?
Sorry for the long/double post (I also posted this on Google). I need to investigate what is causing the high %w and %b for the d0 device in iostat output on Solaris 8. The machine is a SunFire V240 (dual 1.5 GHz, 8 GB memory) running Veritas VCS, VxVM, and VxFS 4.1 MP1, with 2 x 72 GB internal disks plus EMC CX300 storage.
For example, the first capture below shows 64 writes/sec at roughly 8 KB each (about 500 KB/s in total), yet that produces a %w of 85 and a %b of 64. 64 ops/sec and ~500 KB/s of write bandwidth is not much for any reasonably modern disk, so I'm trying to find out what could be wrongly configured.
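For reference, kw/s divided by w/s gives the average write size; here is a quick sketch that prints it per device from live iostat output (the awk field positions assume the 11-column extended layout shown in the captures below, and the first report is the since-boot average):
# Average KB per write, per device (kw/s divided by w/s).
$ iostat -xn 5 2 | awk 'NF == 11 && $2 + 0 > 0 { printf "%-12s %6.1f KB/write\n", $11, $4 / $2 }'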
/ is using UFS without logging (the default Solaris 8 setting in vfstab). I have read about UFS vs VxFS, but there should be no practical difference for a server with this modest number of ops. Still, I'm concerned about the high %busy for d0. Could it be a misconfiguration?
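If unlogged UFS on / turns out to matter, logging can be enabled without recreating the file system; a minimal sketch (the vfstab line is illustrative only, match it to the existing root entry):
# Check the current mount options for /.
$ mount -p | grep ' / '
# To enable UFS logging persistently, add "logging" to the mount
# options field of the root entry in /etc/vfstab, e.g. (illustrative):
#   /dev/md/dsk/d0  /dev/md/rdsk/d0  /  ufs  1  no  logging
# then reboot (a live remount with -o remount,logging may also work):
$ mount -F ufs -o remount,logging /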
How do I determine what file system, if any, was created on c0t0d0s5? Neither vxprint -ht nor /etc/vfstab mentions c0t0d0s5 (or c0t0d0 at all); it shows up only in /etc/device.tab (excerpt below, followed by a few commands to probe the slice directly):
device.tab:disk1:/dev/rdsk/c0t0d0s2:/dev/dsk/c0t0d0s2::desc="Disk
Drive" type="disk" part="true" removable="false" capacity="213696"
dpartlist="dpart100,dpart102,dpart103,dpart105,dpart106,dpart107"
device.tab:dpart100:/dev/rdsk/c0t0d0s0:/dev/dsk/c0t0d0s0::desc="Disk
Partition" type="dpart" removable="false" capacity="20484288"
dparttype="fs" fstype="ufs" mountpt="/"
device.tab:dpart103:/dev/rdsk/c0t0d0s3:/dev/dsk/c0t0d0s3::desc="Disk
Partition" type="dpart" removable="false" capacity="20484288"
dparttype="fs" fstype="ufs" mountpt="/var"
device.tab:dpart105:/dev/rdsk/c0t0d0s5:/dev/dsk/c0t0d0s5::desc="Disk
Partition" type="dpart" removable="false" capacity="93751488"
dparttype="fs" fstype="ufs" mountpt="/usr/local"
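One way to probe the slice directly with standard Solaris commands (fstyp reads the raw device; the other two show whether the slice is mounted or configured as swap):
$ fstyp /dev/rdsk/c0t0d0s5          # reports ufs/vxfs if a file system exists
$ grep c0t0d0s5 /etc/mnttab         # is it mounted anywhere right now?
$ swap -l                           # is it configured as a swap device?
$ prtvtoc /dev/rdsk/c0t0d0s2        # partition table for the whole disk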
I am getting periodic bursts of up to 50% iowait, lasting 5-10 seconds, every 15-20 minutes. Here are iostat output captures taken when %idle is low (a polling loop for producing captures like these is sketched after them):
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 64.0 0.0 505.7 0.9 2.5 13.3 38.5 85 64 d0
0.0 64.0 0.0 505.7 0.0 2.3 0.0 35.7 0 62 d10
0.0 64.0 0.0 505.7 0.0 1.4 0.0 21.3 0 47 d20
0.5 130.9 0.1 539.1 0.0 3.9 0.0 29.6 0 87 c0t0d0
0.5 64.0 0.1 505.7 0.0 1.4 0.0 21.2 0 47 c0t1d0
Idle 35 , IOW 10 Fri Aug 11 16:15:05 EDT 2006
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 144.6 0.0 1160.4 0.0 2.5 0.0 17.6 0 100 d0
0.0 145.1 0.0 1164.4 0.0 2.1 0.0 14.2 0 98 d10
0.0 144.6 0.0 1160.4 0.0 1.8 0.0 12.2 0 96 d20
0.0 145.1 0.0 1164.4 0.0 2.1 0.0 14.2 0 98 c0t0d0
0.0 144.6 0.0 1160.4 0.0 1.8 0.0 12.1 0 96 c0t1d0
0.0 2.5 0.0 20.2 0.0 0.0 0.0 0.6 0 0 c6t19d5
0.0 2.0 0.0 16.0 0.0 0.0 0.0 0.7 0 0 c6t19d25
0.0 2.0 0.0 12.5 0.0 0.0 0.0 0.8 0 0 c6t20d26
Idle 53 , IOW 25 Fri Aug 11 19:01:19 EDT 2006
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 72.4 0.0 579.3 1.0 2.7 13.3 37.3 96 74 d0
0.0 72.4 0.0 579.4 0.0 2.5 0.0 34.7 0 73 d10
0.0 74.4 0.0 595.3 0.0 1.4 0.0 19.4 0 57 d20
0.0 146.3 0.0 616.3 0.0 4.3 0.0 29.5 0 96 c0t0d0
0.0 74.4 0.0 595.3 0.0 1.4 0.0 19.4 0 57 c0t1d0
0.0 1.0 0.0 2.7 0.0 0.0 0.0 0.8 0 0 c6t19d5
0.0 0.5 0.0 4.0 0.0 0.0 0.0 6.6 0 0 c6t19d27
0.0 1.5 0.0 8.2 0.0 0.0 0.0 3.9 0 1 c6t20d26
Idle 46 , IOW 48 Tue Aug 15 14:29:09 EDT 2006
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 35.6 0.0 284.7 0.6 1.6 16.5 45.5 59 44 d0
0.0 35.6 0.0 284.7 0.0 1.6 0.0 44.3 0 43 d10
0.0 34.9 0.0 279.1 0.0 0.5 0.0 14.1 0 29 d20
0.2 95.0 0.0 314.4 0.0 2.7 0.0 28.0 0 59 c0t0d0
0.2 35.6 0.0 284.7 0.0 1.6 0.0 44.1 0 43 c0t0d0s0
0.0 59.4 0.0 29.7 0.0 1.1 0.0 18.3 0 59 c0t0d0s5
0.2 34.9 0.0 279.1 0.0 0.5 0.0 14.1 0 29 c0t1d0
0.2 34.9 0.0 279.1 0.0 0.5 0.0 14.1 0 29 c0t1d0s0
Idle 38 , IOW 0 Tue Aug 15 16:45:09 EDT 2006
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 19.6 0.0 154.9 0.3 0.8 13.6 38.7 27 22 d0
0.0 19.6 0.0 154.9 0.0 0.7 0.0 37.5 0 21 d10
0.0 19.6 0.0 154.9 0.0 0.3 0.0 15.3 0 15 d20
0.2 44.0 0.0 167.1 0.0 1.2 0.0 27.8 0 27 c0t0d0
0.2 19.6 0.0 154.9 0.0 0.7 0.0 37.1 0 21 c0t0d0s0
0.0 24.4 0.0 12.2 0.0 0.5 0.0 20.2 0 27 c0t0d0s5
0.2 19.6 0.0 154.9 0.0 0.3 0.0 15.2 0 15 c0t1d0
0.2 19.6 0.0 154.9 0.0 0.3 0.0 15.2 0 15 c0t1d0s0
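For reference, timestamped captures like the ones above can be produced with a simple polling loop along these lines (a sketch only; the 80% idle threshold and 5-second interval are arbitrary choices, and -p adds the per-slice rows):
#!/bin/sh
# Log an extended, per-slice iostat sample whenever CPU idle drops
# below a threshold.  Sketch only: threshold and interval are arbitrary.
while :; do
    idle=`vmstat 5 2 | tail -1 | awk '{ print $NF }'`   # id is the last column
    if [ "$idle" -lt 80 ]; then
        date
        iostat -xnp 5 2      # first report is the since-boot average
    fi
done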
ptree output for the processes running during a high-iowait (~25%) period, for example:
2079 /opt/VRTSvcs/bin/Mount/MountAgent -type Mount
17089 /opt/VRTSperl/bin/perl -S /opt/VRTSvcs/bin/Mount/monitor mnt_name /mymntpt/
17100 sh -c /opt/VRTSvcs/bin/Mount/Mountmonitor
17101 /opt/VRTSvcs/bin/Mount/Mountmonitor
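During one of these high-iowait windows it may be worth checking what the busiest processes are doing at the system-call level; a sketch with standard tools (17101 is just the Mount monitor PID from the ptree output above):
# Which processes are busiest during the spike?
$ prstat -s cpu 5 2
# Attach to a suspect process and count its system calls (Ctrl-C to
# stop and print the summary); -f follows any children it forks.
$ truss -c -f -p 17101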
$ metastat -p
d0 -m d10 d20 1
d10 1 1 c0t0d0s0
d20 1 1 c0t1d0s0
d1 -m d11 d21 1
d11 1 1 c0t0d0s1
d21 1 1 c0t1d0s1
$ metastat
d0: Mirror
Submirror 0: d10
State: Okay
Submirror 1: d20
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 111152448 blocks
d10: Submirror of d0
State: Okay
Size: 111152448 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c0t0d0s0 0 No Okay
d20: Submirror of d0
State: Okay
Size: 111152448 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c0t1d0s0 0 No Okay
d1: Mirror
Submirror 0: d11
State: Okay
Submirror 1: d21
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 8395200 blocks
d11: Submirror of d1
State: Okay
Size: 8395200 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c0t0d0s1 0 No Okay
d21: Submirror of d1
State: Okay
Size: 8395200 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c0t1d0s1 0 No Okay
Here is a listing of /etc/system:
set nfssrv:nfs_portmon=1
set noexec_user_stack_log=1
set noexec_user_stack=1
* vxvm_START (do not remove)
forceload: drv/vxdmp
forceload: drv/vxio
forceload: drv/vxspec
* vxvm_END (do not remove)
* Begin MDD root info (do not edit)
forceload: misc/md_trans
forceload: misc/md_raid
forceload: misc/md_hotspares
forceload: misc/md_sp
forceload: misc/md_stripe
forceload: misc/md_mirror
forceload: drv/pcisch
forceload: drv/glm
forceload: drv/sd
rootdev:/pseudo/md@0:0,0,blk
* End MDD root info (do not edit)
* vxfs_START -- do not remove the following lines:
* VxFS requires a stack size greater than the default 8K.
* The following value allows the kernel stack size to be
* increased to 24K.
set lwp_default_stksize=0x6000
* vxfs_END
set sd:sd_io_time=0x78
set sd:sd_max_throttle=32
set scsi_options=0x7f8
set rlim_fd_cur=1024
set rlim_fd_max=2048
set shmsys:shminfo_shmmax=536870912
set shmsys:shminfo_shmseg=1024
set shmmin:shminfo_shmmin=1
set shmsys:shminfo_shmmni=1024
set semsys:seminfo_semmni=1024
set semsys:seminfo_semaem=16384
set semsys:seminfo_semvmx=32767
set semsys:seminfo_semmap=1026
set semsys:seminfo_semmns=16384
set semsys:seminfo_semmsl=100
set semsys:seminfo_semopm=100
set semsys:seminfo_semmnu=2048
set semsys:seminfo_semume=256
set msgsys:msginfo_msgmni=50
set msgsys:msginfo_msgmap=1026
set msgsys:msginfo_msgmax=4096
set msgsys:msginfo_msgmnb=4096
* vxfs_START -- do not remove the following lines:
* VxFS requires a stack size greater than the default 8K.
* The following value allows the kernel stack size to be
* increased to 24K.
set rpcmod:svc_default_stksize=0x6000
* vxfs_END
* Begin MDD database info (do not edit)
set md:mddb_bootlist1="sd:485:16 sd:485:1050 sd:493:16 sd:493:1050"
* End MDD database info (do not edit)
* C2 Logging RFC 136898
set abort_enable=1
set c2audit:audit_load = 1
set abort_enable = 0
swap -l
swapfile dev swaplo blocks free
/dev/md/dsk/d1 85,1 16 8395184 8395184
df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d0 54736766 14238603 39950796 27% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
swap 8736904 16 8736888 1% /var/run
dmpfs 8736888 0 8736888 0% /dev/vx/dmp
dmpfs 8736888 0 8736888 0% /dev/vx/rdmp
swap 8738240 1352 8736888 1% /tmp
mpstat 1 3 and vmstat 1 3 outputs follow as well. I noticed that when CPU idle goes down, syscl jumps into the tens of thousands (a sketch for identifying the caller follows these captures).
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr m0 m1 m1 m1 in sy cs us sy id
0 0 0 9140136 5931944 369 1565 43 35 34 0 0 5 0 5 0 477 3090 1947 7 8 86
0 0 0 8867176 5450560 432 1028 0 87 87 0 0 0 0 0 0 421 49743 11914 24 16 60
0 0 0 8869392 5453376 748 3013 0 16 16 0 0 3 0 3 0 577 46593 11106 23 21 55
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 739 0 825 218 168 918 42 175 74 0 1622 6 8 3 83
1 825 0 1170 358 230 1028 37 175 90 0 1467 7 7 4 81
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 943 0 245 91 27 3193 46 170 16 0 12301 10 16 0 74
1 755 0 424 389 266 3148 38 172 45 0 11072 14 14 1 71
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 28 47 21 548 7 123 10 0 500 0 1 0 99
1 0 0 21 336 221 527 2 124 25 0 724 0 0 0 100
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr m0 m1 m1 m1 in sy cs us sy id
0 0 0 9140128 5931936 369 1565 43 35 34 0 0 5 0 5 0 477 3090 1947 7 8 86
0 0 0 8864480 5451752 62 613 0 16 8 0 0 3 0 3 0 358 3491 1288 2 6 92
0 0 0 8864480 5451344 83 156 8 63 63 0 0 0 0 0 0 361 19037 1267 6 9 85
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 739 0 825 218 168 918 42 175 74 0 1622 6 8 3 83
1 825 0 1170 358 230 1028 37 175 90 0 1467 7 7 4 81
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 640 0 217 186 134 665 40 187 425 0 7347 4 8 4 84
1 352 0 531 374 253 843 33 199 272 0 4355 7 5 4 84
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 100 0 88 91 42 1533 36 173 19 0 3948 4 4 0 92
1 281 0 170 365 245 552 14 158 25 0 3155 7 2 1 90
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr m0 m1 m1 m1 in sy cs us sy id
0 0 0 9140128 5931936 369 1565 43 35 34 0 0 5 0 5 0 477 3090 1947 7 8 86
0 0 0 8862328 5449944 464 1604 0 269 269 0 0 11 0 11 0 529 8730 1378 13 5 82
0 0 0 8866144 5452120 59 289 0 79 40 0 0 39 0 39 0 750 10236 1180 2 2 96
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 739 0 825 218 168 918 42 175 74 0 1622 6 8 3 83
1 825 0 1170 358 230 1028 37 175 90 0 1467 7 7 4 81
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 58 0 4629 57 22 591 23 153 19 0 6301 1 8 0 91
1 650 0 9483 340 225 524 25 143 30 0 13834 10 13 0 77
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 134 0 52 123 88 746 21 142 125 0 2046 1 4 2 93
1 338 0 156 349 227 652 14 163 180 0 4254 4 5 2 89
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr m0 m1 m1 m1 in sy cs us sy id
0 0 0 9140128 5931936 369 1565 43 35 34 0 0 5 0 5 0 477 3090 1947 7 8 86
8 0 0 8852312 5438200 899 5846 0 40 40 0 0 2 0 1 0 546 33409 2211 44 17 39
4 0 0 8860736 5446328 717 7738 0 24 24 0 0 1 0 1 0 557 35240 1983 70 21 9
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 739 0 825 218 168 918 42 175 74 0 1622 6 8 3 83
1 825 0 1170 358 230 1028 37 175 90 0 1467 7 7 4 81
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 660 0 8262 124 40 633 60 178 29 0 10076 11 23 0 67
1 838 0 7907 350 216 704 39 175 33 0 10647 6 16 0 78
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 21 105 63 645 33 175 125 0 1940 0 4 0 96
1 32 0 92 340 223 649 28 169 134 0 2090 1 1 1 97
Idle 42 , IOW 3 Tue Aug 15 17:40:14 EDT 2006
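To pin down which process is generating those tens of thousands of system calls, something like this could be run during a spike (prstat microstate accounting, if available on this Solaris 8 build, reports SCL, system calls per second, per process):
# Per-process microstate view: the SCL column is syscalls/sec.
$ prstat -m -s cpu 5 2
# Fall back to a plain CPU sort, then truss -c the top PIDs as above.
$ prstat -s cpu 1 10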
Thank you.