
Network buffer size, Symantec NetBackup & Solaris 10

807557, Jan 16 2008
I have been doing a lot of testing of NetBackup (5.1MP5) on Solaris 10 (120011-14) and I've found some strange behaviour that someone on this forum may be able to explain. I think it is more a Solaris question than a NetBackup one, which is why I'm posting here. In particular, last year, when these servers were on an earlier release of Solaris 10, I think I had better performance than I do now. This is a V240 with all four bge interfaces aggregated together.

To make NetBackup go well over the network the old wisdom was to use ndd to increase the TCP parameters:

/usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 65535
/usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 65535

But we now set much larger values:

/usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 4194304
/usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 4194304

/usr/sbin/ndd -set /dev/tcp tcp_max_buf 4194304
/usr/sbin/ndd -set /dev/tcp tcp_cwnd_max 2097152
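The motivation for the larger values is bandwidth-delay-product arithmetic: a 65535-byte window caps throughput at window/RTT, far below what a gigabit link can carry. A small sketch of that calculation, where the 10 ms round-trip time is an assumed figure for illustration, not something measured in this post:

```python
# Bandwidth-delay-product arithmetic showing why a 65535-byte TCP
# window throttles a gigabit link. The 10 ms RTT is an assumption
# for illustration.

LINK_BPS = 1_000_000_000        # one bge interface: 1 Gbit/s
RTT_S = 0.010                   # assumed round-trip time: 10 ms

# Bytes that must be in flight to keep the pipe full:
bdp_bytes = LINK_BPS / 8 * RTT_S

def max_throughput_bps(window_bytes, rtt_s=RTT_S):
    """Upper bound on throughput for a given receive window."""
    return window_bytes / rtt_s * 8

print(f"BDP: {bdp_bytes:,.0f} bytes")                            # 1,250,000
print(f"64K window: {max_throughput_bps(65535) / 1e6:,.1f} Mbit/s")
print(f"4MB window: {max_throughput_bps(4194304) / 1e6:,.1f} Mbit/s")
```

With a 10 ms RTT, the 64K window limits a single stream to roughly 52 Mbit/s, which is consistent with the poor backup performance described above; the 4 MB window removes the window as the bottleneck.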

In NetBackup I am asking it to use a network buffer size of 512K; however, with netstat -f inet I was seeing the Rwind value hovering at much lower values, as if something around 65535 was actually in use.

Looking at the NetBackup logging I could see the buffer size being requested and then apparently overridden:

bptm/log.011508:13:12:01.829 [8366] <2> io_set_recvbuf: setting receive network buffer to 524288 bytes
bptm/log.011508:13:12:01.829 [8366] <2> io_set_recvbuf: receive network buffer is 65160 bytes
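Those two log lines imply a request-then-read-back pattern: ask the kernel for a buffer size, then check what was actually granted. A generic sketch of that pattern (this is not NetBackup's actual code, just an illustration of the mechanism):

```python
# Generic sketch of the pattern suggested by io_set_recvbuf's log lines:
# request a receive buffer with setsockopt(), then read back what the
# kernel actually granted with getsockopt(). Kernels silently clamp the
# request to their configured maximum, which is why the second log line
# can show a smaller value than the first.

import socket

def set_recvbuf(sock, requested):
    print(f"setting receive network buffer to {requested} bytes")
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)
    granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    print(f"receive network buffer is {granted} bytes")
    return granted

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
granted = set_recvbuf(s, 524288)   # granted may differ from 524288,
s.close()                          # depending on kernel limits
```

On the system above, the read-back value (65160) suggests the clamp being applied is close to the old 65535 default rather than the 4 MB tcp_max_buf that ndd reports, which is the heart of the question.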

That is the background. What I found interesting was what happened when I followed the instructions in the Tunable Parameters Reference Manual (page 165) to set the per-route metrics:

root ~: route get default
route to: default
destination: default
mask: default
gateway: xxx.xxx.xxx.xxx
interface: aggr1
flags: <UP,GATEWAY,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,ms  rttvar,ms  hopcount   mtu  expire
        0         0         0       0          0         0  1500       0
root ~: route change default -recvpipe 1048576 -sendpipe 1048576
change net default
root /usr/openv/netbackup/logs/bptm: route get default
route to: default
destination: default
mask: default
gateway: xxx.xxx.xxx.xxx
interface: aggr1
flags: <UP,GATEWAY,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,ms  rttvar,ms  hopcount   mtu  expire
  1048576   1048576         0       0          0         0  1500       0

Having done that, NetBackup behaves differently:

17:21:32.967 [2571] <2> io_set_recvbuf: setting receive network buffer to 524288 bytes
17:21:32.967 [2571] <2> io_set_recvbuf: receive network buffer is 1049800 bytes

netstat -f inet also shows the 1MB Rwind value.

And the performance is sharply better; I'm getting more or less what I expected.

So my question is: why is the ndd setting not taking effect? The per-route change is supposed to be needed only if I want different settings for a particular route.

I'd like to test this on an earlier Solaris 10 release, but that will require a rebuild.
Post Details
Locked on Feb 13 2008
Added on Jan 16 2008
0 comments
367 views