From: Murali Krishnamurthy
To: "Scheffenegger, Richard", FreeBSD Transport
Subject: Re: FreeBSD TCP (with iperf3) comparison with Linux
Date: Fri, 30 Jun 2023 16:26:39 +0000
List-Archive: https://lists.freebsd.org/archives/freebsd-transport

Richard,

Appreciate the useful inputs you have shared so far. I will try to track down where the packet drops are happening.

Regarding HyStart: I see that the BSD code base also has support for it. May I know in which release we can expect it, if it is not already available?
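In the meantime, this is roughly how I have been checking a running system for it (a sketch; the exact HyStart sysctl node names are an assumption on my part and vary between versions):

    # Load the CUBIC cc module if it is not in the kernel already, then
    # scan the cc sysctl subtree for HyStart-related knobs (node names
    # are version-dependent, so this may simply print nothing):
    kldload cc_cubic 2>/dev/null
    sysctl net.inet.tcp.cc | grep -i hystart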

Regarding this point: "Switching to other cc modules may give some more insights. But again, I suspect that momentary (microsecond) burstiness of BSD may be causing this significantly higher loss rate."

Is there some material somewhere that I can read to understand this in more detail?

Regards,
Murali

On 30/06/23, 9:35 PM, "owner-freebsd-transport@freebsd.org" <owner-freebsd-transport@freebsd.org> wrote:

Hi Murali,

> Q. Since you mention two hypervisors - what is the physical network topology in between these two servers? What theoretical link rates would be attainable?

> Here is the topology

>

> Iperf endpoints are on 2 different hypervisors.

>

>   ___________     _______________            ___________     _______________
>  | Linux VM1 |   |  BSD 13 VM 1  |          | Linux VM2 |   |  BSD 13 VM 2  |
>  |___________|   |_______________|          |___________|   |_______________|
>        |                 |                       |                 |
>   _____|_________________|______          ______|_________________|_____
>  |                            |          |                            |
>  |      ESX Hypervisor 1      |----------|      ESX Hypervisor 2      |
>  |____________________________|   10G    |____________________________|
>                       (link connected via an L2 switch)

>

>

> The NIC is of 10G capacity on both ESX servers, and it has the below config.

So, when both VMs run on the same hypervisor, maybe with another VM to simulate the 100 ms delay, can you attain a lossless baseline scenario?

 

 

> BDP-limited throughput for a 16 MB socket buffer: 16 MB * 8 bits/byte * (1000 ms / 100 ms RTT) / 1024 = 1.25 Gbps
>
> So theoretically we should see close to 1.25 Gbps of bitrate, and we see Linux reaching close to this number.

 

Under no loss, yes.
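As a quick check of that arithmetic (a sketch assuming a 16 MiB window, a 100 ms RTT, and binary "Gbps"):

    # Window-limited throughput = window / RTT:
    echo "16 * 1024^2 * 8 / 0.1 / 1024^3" | bc -l
    # prints 1.25 (i.e. ~1.25 Gbit/s)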

 

 

> But BSD is not able to do that.

>

>

> Q. Did you run iperf3? Did the transmitting endpoint report any retransmissions between Linux or FBSD hosts?

>

> Yes, we used iperf3. I see Linux doing fewer retransmissions compared to BSD.
>
> On BSD, the best performance was around 600 Mbps of bitrate, and the number of retransmissions seen at that rate is around 32K.
>
> On Linux, the best performance was around 1.15 Gbps of bitrate, and the number of retransmissions seen at that rate is only about 2K.
>
> So, as you pointed out, the number of retransmissions on BSD could be the real issue here.
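For reference, iperf3 reports those retransmissions in the "Retr" column of the sender-side output; a typical invocation of this shape (address, duration and window size here are placeholders, not necessarily the exact command used):

    # Server side:
    iperf3 -s
    # Client side: 30 s run, 16 MB socket buffer, 1 s reporting interval;
    # the sender report includes per-interval and total retransmit counts.
    iperf3 -c <server-ip> -t 30 -w 16M -i 1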

 

There are other cc modules available, but I believe one major deviation is that Linux can perform mechanisms like HyStart, ACKing every packet when the client detects slow start, and pacing to achieve more uniform packet transmissions.
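On the Linux sender these mechanisms are visible per connection while iperf3 runs; a sketch (the destination address is a placeholder):

    # cwnd, pacing_rate, delivery_rate and retrans counters per socket:
    ss -tin dst <server-ip>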

 

I think the next step would be to find out at which queue those packet discards are coming from (external switch? delay generator? vSwitch? Eth stack inside the VM?).

 

Or alternatively, provide your ESX hypervisors with vastly more link speed, to rule out any L2-induced packet drops - provided your delay generator is not the source when momentarily overloaded.

 

> Is there a way to reduce this packet loss by fine-tuning some parameters w.r.t. the ring buffer or any other areas?

 

Finding where these drops arise (by looking at queue and port counters) would be the next step. But this is not really my specific area of expertise beyond the high-level, vendor-independent observations.
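Inside the FreeBSD guest, the usual starting points would be something like the following (the "vmx.0" device name assumes a vmxnet3 vNIC, and the counter names grepped for are an assumption):

    # Interface-level error/drop counters inside the guest:
    netstat -i
    # TCP-level retransmit counters:
    netstat -s -p tcp | grep -i retrans
    # Driver/queue counters for an assumed vmxnet3 vNIC:
    sysctl dev.vmx.0 | grep -iE 'drop|err'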

 

Switching to other cc modules may give some more insights. But again, I suspect that momentary (microsecond) burstiness of BSD may be causing this significantly higher loss rate.
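For reference, a minimal sketch of how a cc module switch is done on FreeBSD:

    # See which congestion-control modules are registered:
    sysctl net.inet.tcp.cc.available
    # Load another one (CUBIC here) and make it the system default:
    kldload cc_cubic
    sysctl net.inet.tcp.cc.algorithm=cubic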

 

TCP RACK would be another option. That stack has pacing, more fine-grained timing, the RACK loss recovery mechanisms, etc. Maybe that helps reduce the packet drops observed by iperf and, consequently, yields a higher overall throughput.
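A rough sketch of trying the RACK stack (this assumes the kernel was built with the TCPHPTS option and the extra TCP stacks, e.g. WITH_EXTRA_TCP_STACKS=1; stock 13.x GENERIC kernels may not ship the module):

    # Load the RACK TCP stack and confirm it registered:
    kldload tcp_rack
    sysctl net.inet.tcp.functions_available
    # Use RACK for all new connections:
    sysctl net.inet.tcp.functions_default=rack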

 

 

 

 
