Date: Mon, 29 Jan 2024 09:46:45 -0800
From: Jim Long <freebsd-questions@umpquanet.com>
To: freebsd-questions@freebsd.org
Subject: VirtIO/ipfw/natd throughput problem in hosted VM
I'm running FreeBSD 14.0-RELEASE in a quad-core, 12 GB VM that is
commercially hosted under KVM (I'm told).  It was installed from the
main disc1.iso image, not any of the VM-centric ISOs.

# grep -i network /var/run/dmesg.boot
virtio_pci0: <VirtIO PCI (legacy) Network adapter> port 0xc000-0xc03f mem 0xfebd1000-0xfebd1fff,0xfe000000-0xfe003fff irq 11 at device 3.0 on pci0
vtnet0: <VirtIO Networking Adapter> on virtio_pci0

# ifconfig public
public: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
        options=4c079b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,TXCSUM_IPV6>
        ether fa:16:3e:ca:b5:9c
        inet 10.1.170.27 netmask 0xffffff00 broadcast 10.1.170.255
        media: Ethernet autoselect (10Gbase-T <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>

(10.1.170.27 is my obfuscated routable public IP.)

Using ipfw *without* any "divert" rule, I get good network speed.
Transferring two larger files, one time apiece:

# ipfw show
65000 2966704 2831806570 allow ip from any to any
65535     135      35585 deny ip from any to any

# 128MB @ > 94MB/s:
# rm -f random-data-test-128M
# time rsync -Ppv example.com:random-data-test-128M .
random-data-test-128M
    134,217,728 100%   94.26MB/s    0:00:01 (xfr#1, to-chk=0/1)

sent 43 bytes  received 134,250,588 bytes  53,700,252.40 bytes/sec
total size is 134,217,728  speedup is 1.00

real    0m1.645s
user    0m0.826s
sys     0m0.788s

# 1024MB @ > 105MB/s:
# rm -f random-data-test-1G
# time rsync -Ppv example.com:random-data-test-1G .
random-data-test-1G
  1,073,741,824 100%  105.98MB/s    0:00:09 (xfr#1, to-chk=0/1)

sent 43 bytes  received 1,074,004,060 bytes  102,286,105.05 bytes/sec
total size is 1,073,741,824  speedup is 1.00

real    0m9.943s
user    0m4.701s
sys     0m5.769s
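For reference, a baseline ruleset like the one above needs nothing more
than a flush and a single allow rule (a sketch, not pasted from my own
setup; rule 65535 is ipfw's built-in default rule and is never added
explicitly):

# ipfw -q flush
# ipfw add 65000 allow ip from any to any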
But with an "ipfw divert" rule in place (and natd running as
'natd -n public'), across 5 transfers of a 2M file of /dev/random,
I get very poor transfer speeds:

# ipfw add 65000 divert natd all from any to any via public
# ipfw show
60000       3        292 divert 8668 ip from any to any via public
65000 2950208 2817524670 allow ip from any to any
65535     135      35585 deny ip from any to any

Test 1 of 5, < 180kB/s:

# rm -f random-data-test-2M
# time rsync -Ppv example.com:random-data-test-2M .
random-data-test-2M
      2,097,152 100%  179.08kB/s    0:00:11 (xfr#1, to-chk=0/1)

sent 43 bytes  received 2,097,752 bytes  167,823.60 bytes/sec
total size is 2,097,152  speedup is 1.00

real    0m12.199s
user    0m0.085s
sys     0m0.027s

Test 2 of 5, < 115kB/s:

# rm -f random-data-test-2M
# rsync -Ppv example.com:random-data-test-2M .
random-data-test-2M
      2,097,152 100%  114.40kB/s    0:00:17 (xfr#1, to-chk=0/1)

sent 43 bytes  received 2,097,752 bytes  107,579.23 bytes/sec
total size is 2,097,152  speedup is 1.00

real    0m19.300s
user    0m0.072s
sys     0m0.051s

Test 3 of 5, < 37kB/s (almost 57s elapsed time):

# rm -f random-data-test-2M
# time rsync -Ppv example.com:random-data-test-2M .
random-data-test-2M
      2,097,152 100%   36.49kB/s    0:00:56 (xfr#1, to-chk=0/1)

sent 43 bytes  received 2,097,752 bytes  36,483.39 bytes/sec
total size is 2,097,152  speedup is 1.00

real    0m56.868s
user    0m0.080s
sys     0m0.023s

Test 4 of 5, < 112kB/s:

# rm -f random-data-test-2M
# time rsync -Ppv example.com:random-data-test-2M .
random-data-test-2M
      2,097,152 100%  111.89kB/s    0:00:18 (xfr#1, to-chk=0/1)

sent 43 bytes  received 2,097,752 bytes  102,331.46 bytes/sec
total size is 2,097,152  speedup is 1.00

real    0m19.544s
user    0m0.095s
sys     0m0.015s

Test 5 of 5, 130kB/s:

# rm -f random-data-test-2M
# time rsync -Ppv example.com:random-data-test-2M .
random-data-test-2M
      2,097,152 100%  130.21kB/s    0:00:15 (xfr#1, to-chk=0/1)

sent 43 bytes  received 2,097,752 bytes  127,139.09 bytes/sec
total size is 2,097,152  speedup is 1.00

real    0m16.583s
user    0m0.072s
sys     0m0.035s

How can I tweak my network stack to get reasonable throughput from
natd?  I'm happy to respond to requests for additional details.

Thank you!
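P.S. For completeness, a divert/natd setup like the one above is
typically made persistent with rc.conf settings along the lines of the
sketch below.  This is illustrative only, not pasted from my
configuration; for these tests the divert rule was added by hand exactly
as shown above, and rc.firewall should insert its own divert rule (at
its own rule number) when natd_enable is set.

# /etc/rc.conf fragment -- illustrative sketch, not copied from my system
firewall_enable="YES"        # load ipfw at boot via /etc/rc.firewall
firewall_type="open"         # permissive base ruleset (pass all traffic)
natd_enable="YES"            # start natd(8) at boot
natd_interface="public"      # rc.d/natd turns this into 'natd -n public'
natd_flags=""                # extra natd(8) options; none needed here
#gateway_enable="YES"        # only if the box also forwards for other hosts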