Date:      Wed, 19 Sep 2018 13:57:04 +0000
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        KIRIYAMA Kazuhiko <kiri@kx.openedu.org>, "Andrey V. Elsukov" <bu7cher@yandex.ru>
Cc:        "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject:   Re: NFS poor performance in ipfw_nat
Message-ID:  <YTOPR0101MB1820C225901C5716B4265725DD1C0@YTOPR0101MB1820.CANPRD01.PROD.OUTLOOK.COM>
In-Reply-To: <201809190258.w8J2w72D053986@kx.openedu.org>
References:  <201809172253.w8HMrXSS025987@kx.openedu.org> <8315728b-afe9-7631-d2ad-2d9b06c3d72d@yandex.ru> <201809190033.w8J0X0J5051781@kx.openedu.org>, <201809190258.w8J2w72D053986@kx.openedu.org>

KIRIYAMA Kazuhiko wrote:
[good stuff snipped]
>
> Thanks for your advice. Add '-lro' and '-tso' to ifconfig,
> transfer rate up to almost native NIC speed:
>
> # dd if=/dev/zero of=/.dake/tmp/foo.img bs=1k count=1m
> 1048576+0 records in
> 1048576+0 records out
> 1073741824 bytes transferred in 10.688162 secs (100460852 bytes/sec)
> #
>
> BTW in VM on behyve, transfer rate to NFS mount of VM server
> (bhyve) is appreciably low level:
>
> # dd if=/dev/zero of=/.dake/tmp/foo.img bs=1k count=1m
> 1048576+0 records in
> 1048576+0 records out
> 1073741824 bytes transferred in 32.094448 secs (33455687 bytes/sec)
>
>This was limited by disk transfer speed:
>
># dd if=/dev/zero of=/var/tmp/foo.img bs=1k count=1m
>1048576+0 records in
>1048576+0 records out
>1073741824 bytes transferred in 21.692358 secs (49498623 bytes/sec)
>#
It sounds like this is resolved, thanks to Andrey.
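For reference, the LRO/TSO workaround applied above might look like the following sketch. The interface name "em0" and the helper function are hypothetical placeholders; substitute your own NIC, and note the command is printed here rather than executed:

```shell
# Hypothetical helper: build the ifconfig command that disables LRO and TSO
# on a given interface (printed rather than executed in this sketch).
disable_offload_cmd() {
  printf 'ifconfig %s -lro -tso\n' "$1"
}

disable_offload_cmd em0
```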

If you have more problems like this, another thing to try is reducing the I/O
size with mount options at the client.
For example, you might try adding "rsize=4096,wsize=4096" to your mount and
then increase the size by powers of 2 (8192, 16384, 32768) and see which size
works best. (This is another way to work around TSO problems. It also helps
when a net interface or packet filter can't keep up with a burst of 40+ ethernet
packets, which is what is generated when 64K I/O is used.)
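The size-stepping experiment above can be sketched as follows. The export path "server:/export" and mount point "/mnt" are placeholders, and the mount commands are printed rather than run so each can be tried by hand between dd benchmarks:

```shell
# Hypothetical sketch: generate a mount command for each candidate I/O size.
# A 64K transfer splits into roughly 65536 / 1448 ~= 45 TCP segments at a
# standard 1500-byte MTU, which is the 40+ packet burst mentioned above.
nfs_mount_cmd() {
  printf 'mount -t nfs -o rsize=%s,wsize=%s server:/export /mnt\n' "$1" "$1"
}

for size in 4096 8192 16384 32768; do
  nfs_mount_cmd "$size"
done
```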

Btw, doing "nfsstat -m" on the client will show you what mount options are
actually being used. This can be useful information.
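As a sketch of checking the negotiated sizes, the flag list could be filtered like this. The sample line below is a hypothetical stand-in for the comma-separated flags "nfsstat -m" prints; run "nfsstat -m" on your own client to get the real one:

```shell
# Sketch: pull rsize/wsize out of a comma-separated mount-flags line.
# The contents of $flags are a hypothetical example, not real output.
flags='nfsv3,tcp,resvport,hard,cto,sec=sys,rsize=32768,wsize=32768'
echo "$flags" | tr ',' '\n' | grep -E '^(rsize|wsize)='
```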

Good to hear it has been resolved, rick
[more stuff snipped]



