
Solaris 10 nfs and forcedirectio

Jun 3 2014 (edited Jun 5 2014)

Hi,

we're running a Solaris 10 U11 SPARC system (kernel Generic_150400-11) and want to use an NFS share mounted with the following options:

remote/read/write/setuid/nodevices/rstchown/bg/hard/nointr/rsize=32768/wsize=32768/proto=tcp/forcedirectio/vers=3/xattr
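For reference, the corresponding mount command would look roughly like this (the server name and paths are placeholders, not our real ones):

mount -F nfs -o rw,bg,hard,nointr,rstchown,rsize=32768,wsize=32768,proto=tcp,vers=3,forcedirectio \
    nfsserver:/export/data /mnt/data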

The underlying network is a link aggregation (aggr) of two 10 GbE interfaces. With buffering enabled (no forcedirectio), a mkfile writing in 8 KB blocks runs at around 400 MB/s.
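The test is just a large sequential write, something along these lines (file name and size are only examples):

mkfile 2g /mnt/data/testfile
# or, with the 8 KB write size spelled out explicitly:
dd if=/dev/zero of=/mnt/data/testfile bs=8k count=262144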

When forcedirectio is used, the number of I/Os and the throughput drop dramatically.

Although it's clear to me that we're now facing synchronous writes, I'm wondering about the limiting factor: when a second job is started, the number of I/Os per second and the throughput double, while the latency does not change.

The same is true for every further job started in parallel: I/Os per second and throughput scale linearly, even though they remain very low. Every process belonging to these jobs spends about 99% of its time in the sleep (slp) state.
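A rough back-of-the-envelope check (the 0.4 ms per write round trip is only an assumed figure, not a measurement): a single stream issuing synchronous 8 KB writes one at a time can manage at most about 1 / 0.0004 s = 2500 writes/s, i.e. 2500 * 8 KB ≈ 20 MB/s, and every additional parallel stream adds roughly the same amount, which would match the linear scaling we see.

To find out where the writing process spends its sleep time, I traced its off-CPU time with the following DTrace script (it takes the pid as $1):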

#pragma D option quiet

/* record the timestamp when the target process goes off-CPU to sleep */
sched:::off-cpu
/curlwpsinfo->pr_state == SSLEEP && curpsinfo->pr_pid == $1/
{
        self->ts = timestamp;
}

/* on wake-up, charge the sleep time to the stack it slept on */
sched:::on-cpu
/self->ts/
{
        @[execname, stack()] = sum(timestamp - self->ts);
        self->ts = 0;
}

/* after 15 seconds, print the totals in milliseconds and exit */
tick-15s
{
        normalize(@, 1000000);
        printa("%20s %k %@u\n", @);
        exit(0);
}
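Run as, for example, dtrace -s offcpu.d <pid of the writing process> (the script name is arbitrary). The dominant off-CPU stack it reports is the write path waiting for the NFS RPC reply: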

unix`_resume_from_idle+0x1e0
genunix`cv_timedwait_hires+0xb0
rpcmod`clnt_cots_kcallit+0x5f4
nfs`rfscall+0x5f4
nfs`rfs3call+0x60
nfs`nfs3write+0x12c
nfs`nfs3_write+0x7f0
genunix`fop_write+0x20
genunix`write+0x268
unix`syscall_trap32+0xcc
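If it helps, I can also collect a latency distribution for the individual writes; a minimal sketch, assuming the fbt provider can instrument nfs3write on this kernel:

dtrace -n '
fbt:nfs:nfs3write:entry { self->t = timestamp; }
fbt:nfs:nfs3write:return /self->t/
{
        /* microseconds per nfs3write call */
        @["nfs3write latency (us)"] = quantize((timestamp - self->t) / 1000);
        self->t = 0;
}'

That should show whether every single 8 KB write really pays a full over-the-wire round trip.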

Any hints?

thank you very much,

cheers, Frank
