From owner-freebsd-net@freebsd.org Wed Feb 19 06:04:03 2020
Message-Id: <202002190603.01J63naa005208@kx.truefc.org>
Date: Wed, 19 Feb 2020 15:03:49 +0900
From: KIRIYAMA Kazuhiko <kiri@truefc.org>
To: freebsd-net@freebsd.org
Cc: kiri@truefc.org
Subject: How to work with ixgbe in 1GbE network?
Hi, all

I wonder how to make ixgbe work in a 1GbE network. I tried the test setup
below:

                 internet
                     |
            +--------+-------+
            | Netgear JGS516 |
            +---+-----+------+
                |     |
                |     |   +----------------------+
                |     +---+ 13.0-CURRENT(r356739)|  src_host
                |         +----------------------+
                |
                |         +----------------------+
                +---------+ 13.0-CURRENT(r353025)|  dest_host
                          +----------------------+

I then tried to NFS-mount dest_host from src_host, but the mount does not
work smoothly: it takes about 9 seconds!
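One thing worth ruling out first is whether full 9000-byte frames really
pass through the switch end to end. A minimal sketch (assuming dest_host
resolves as in the mount command; 8972 is the ICMP payload left after the
20-byte IPv4 and 8-byte ICMP headers):

```shell
# Sketch: compute the largest unfragmented ICMP payload for a 9000-byte MTU.
mtu=9000
payload=$((mtu - 20 - 8))   # subtract IPv4 header (20) and ICMP header (8)
echo "payload=${payload}"

# On FreeBSD, ping's -D flag sets the don't-fragment bit and -s sets the
# payload size. If this fails while a default-size ping succeeds, the
# switch or a NIC is not actually passing 9000-byte frames:
#   ping -D -s "${payload}" -c 3 dest_host
```

If the don't-fragment ping is dropped, the slow mount could simply be
retransmission timeouts on oversized frames rather than an NFS problem.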
# /usr/bin/time -h mount -t nfs dest_host:/.dake /.dake
        9.15s real              0.04s user              0.02s sys
# nfsstat -m
dest_host:/.dake on /.dake
nfsv3,tcp,resvport,hard,cto,lockd,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=65536,readahead=1,wcommitsize=16777216,timeout=120,retrans=2
# /usr/bin/time -h umount /.dake
        27.26s real             0.04s user              0.02s sys

The route from src_host to dest_host was set to mtu 9000:

# route get dest_host
   route to: xxx.xxx.xxx.xxx.foo
destination: xxx.xxx.xxx.xxx.foo
       mask: xxx.xxx.xxx.xxx
        fib: 0
  interface: ix0
      flags:
 recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
        0         0         0         0       9000         1         0
#

What's wrong?

The src_host environment is as follows:

# uname -a
FreeBSD src_host 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r356739M: Tue Jan 28 21:49:59 JST 2020     root@msrvkx:/usr/obj/usr/src/amd64.amd64/sys/XIJ  amd64
# ifconfig ix0
ix0: flags=8843 metric 0 mtu 9000
        options=4e538bb
        ether 3c:ec:ef:01:a4:e0
        inet xxx.xxx.xxx.xxx netmask 0xfffffff8 broadcast xxx.xxx.xxx.xxx
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29
# sysctl -a | grep jumbo
kern.ipc.nmbjumbo16: 680520
kern.ipc.nmbjumbo9: 1209814
kern.ipc.nmbjumbop: 4083125
vm.uma.mbuf_jumbo_16k.stats.xdomain: 0
vm.uma.mbuf_jumbo_16k.stats.fails: 0
vm.uma.mbuf_jumbo_16k.stats.frees: 0
vm.uma.mbuf_jumbo_16k.stats.allocs: 0
vm.uma.mbuf_jumbo_16k.stats.current: 0
vm.uma.mbuf_jumbo_16k.domain.0.wss: 0
vm.uma.mbuf_jumbo_16k.domain.0.imin: 0
vm.uma.mbuf_jumbo_16k.domain.0.imax: 0
vm.uma.mbuf_jumbo_16k.domain.0.nitems: 0
vm.uma.mbuf_jumbo_16k.limit.bucket_cnt: 0
vm.uma.mbuf_jumbo_16k.limit.bucket_max: 18446744073709551615
vm.uma.mbuf_jumbo_16k.limit.sleeps: 0
vm.uma.mbuf_jumbo_16k.limit.sleepers: 0
vm.uma.mbuf_jumbo_16k.limit.max_items: 680520
vm.uma.mbuf_jumbo_16k.limit.items: 0
vm.uma.mbuf_jumbo_16k.keg.domain.0.free: 0
vm.uma.mbuf_jumbo_16k.keg.domain.0.pages: 0
vm.uma.mbuf_jumbo_16k.keg.efficiency: 99
vm.uma.mbuf_jumbo_16k.keg.align: 7
vm.uma.mbuf_jumbo_16k.keg.ipers: 1
vm.uma.mbuf_jumbo_16k.keg.ppera: 4
vm.uma.mbuf_jumbo_16k.keg.rsize: 16384
vm.uma.mbuf_jumbo_16k.keg.name: mbuf_jumbo_16k
vm.uma.mbuf_jumbo_16k.bucket_size_max: 253
vm.uma.mbuf_jumbo_16k.bucket_size: 253
vm.uma.mbuf_jumbo_16k.flags: 0x43a10000
vm.uma.mbuf_jumbo_16k.size: 16384
vm.uma.mbuf_jumbo_9k.stats.xdomain: 0
vm.uma.mbuf_jumbo_9k.stats.fails: 0
vm.uma.mbuf_jumbo_9k.stats.frees: 0
vm.uma.mbuf_jumbo_9k.stats.allocs: 0
vm.uma.mbuf_jumbo_9k.stats.current: 0
vm.uma.mbuf_jumbo_9k.domain.0.wss: 0
vm.uma.mbuf_jumbo_9k.domain.0.imin: 0
vm.uma.mbuf_jumbo_9k.domain.0.imax: 0
vm.uma.mbuf_jumbo_9k.domain.0.nitems: 0
vm.uma.mbuf_jumbo_9k.limit.bucket_cnt: 0
vm.uma.mbuf_jumbo_9k.limit.bucket_max: 18446744073709551615
vm.uma.mbuf_jumbo_9k.limit.sleeps: 0
vm.uma.mbuf_jumbo_9k.limit.sleepers: 0
vm.uma.mbuf_jumbo_9k.limit.max_items: 1209814
vm.uma.mbuf_jumbo_9k.limit.items: 0
vm.uma.mbuf_jumbo_9k.keg.domain.0.free: 0
vm.uma.mbuf_jumbo_9k.keg.domain.0.pages: 0
vm.uma.mbuf_jumbo_9k.keg.efficiency: 75
vm.uma.mbuf_jumbo_9k.keg.align: 7
vm.uma.mbuf_jumbo_9k.keg.ipers: 1
vm.uma.mbuf_jumbo_9k.keg.ppera: 3
vm.uma.mbuf_jumbo_9k.keg.rsize: 9216
vm.uma.mbuf_jumbo_9k.keg.name: mbuf_jumbo_9k
vm.uma.mbuf_jumbo_9k.bucket_size_max: 253
vm.uma.mbuf_jumbo_9k.bucket_size: 253
vm.uma.mbuf_jumbo_9k.flags: 0x43010000
vm.uma.mbuf_jumbo_9k.size: 9216
vm.uma.mbuf_jumbo_page.stats.xdomain: 0
vm.uma.mbuf_jumbo_page.stats.fails: 0
vm.uma.mbuf_jumbo_page.stats.frees: 2199
vm.uma.mbuf_jumbo_page.stats.allocs: 67734
vm.uma.mbuf_jumbo_page.stats.current: 65535
vm.uma.mbuf_jumbo_page.domain.0.wss: 0
vm.uma.mbuf_jumbo_page.domain.0.imin: 0
vm.uma.mbuf_jumbo_page.domain.0.imax: 0
vm.uma.mbuf_jumbo_page.domain.0.nitems: 0
vm.uma.mbuf_jumbo_page.limit.bucket_cnt: 0
vm.uma.mbuf_jumbo_page.limit.bucket_max: 18446744073709551615
vm.uma.mbuf_jumbo_page.limit.sleeps: 0
vm.uma.mbuf_jumbo_page.limit.sleepers: 0
vm.uma.mbuf_jumbo_page.limit.max_items: 4083125
vm.uma.mbuf_jumbo_page.limit.items: 67298
vm.uma.mbuf_jumbo_page.keg.domain.0.free: 0
vm.uma.mbuf_jumbo_page.keg.domain.0.pages: 67298
vm.uma.mbuf_jumbo_page.keg.efficiency: 97
vm.uma.mbuf_jumbo_page.keg.align: 7
vm.uma.mbuf_jumbo_page.keg.ipers: 1
vm.uma.mbuf_jumbo_page.keg.ppera: 1
vm.uma.mbuf_jumbo_page.keg.rsize: 4096
vm.uma.mbuf_jumbo_page.keg.name: mbuf_jumbo_page
vm.uma.mbuf_jumbo_page.bucket_size_max: 253
vm.uma.mbuf_jumbo_page.bucket_size: 253
vm.uma.mbuf_jumbo_page.flags: 0x43a10000
vm.uma.mbuf_jumbo_page.size: 4096
# sysctl -a | grep nmbclusters
kern.ipc.nmbclusters: 8166250
# sysctl -a | grep intr_storm_threshold
hw.intr_storm_threshold: 0
#

The dest_host environment is as follows:

# uname -a
FreeBSD dest_host 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r353025: Thu Oct  3 19:38:47 JST 2019     admin@dest_host:/ds/obj/current/13.0/r353025/ds/src/current/13.0/r353025/amd64.amd64/sys/GENERIC  amd64
# ifconfig igb0
igb0: flags=8943 metric 0 mtu 9000
        options=4a520b9
        ether 0c:c4:7a:b3:cf:d4
        inet xxx.xxx.xxx.xxx netmask 0xfffffff8 broadcast xxx.xxx.xxx.xxx
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29
# sysctl -a | grep jumbo
kern.ipc.nmbjumbo16: 339123
kern.ipc.nmbjumbo9: 602886
kern.ipc.nmbjumbop: 2034741
# sysctl -a | grep nmbclusters
kern.ipc.nmbclusters: 4069482
# sysctl -a | grep intr_storm_threshold
hw.intr_storm_threshold: 0
#

Best regards
---
Kazuhiko Kiriyama