From: Dan Langille <dan@langille.org>
Date: Fri, 1 Oct 2010 18:49:17 -0400
To: Artem Belevich
Cc: freebsd-stable@freebsd.org
Subject: Re: zfs send/receive: is this slow?

FYI: this is all on the same box.

--
Dan Langille
http://langille.org/

On Oct 1, 2010, at 5:56 PM, Artem Belevich wrote:

> Hmm. It did help me a lot when I was replicating ~2TB worth of data
> over GigE. Without mbuffer, things were roughly in the ballpark of
> your numbers. With mbuffer I got around 100MB/s.
>
> Assuming you have two boxes connected via Ethernet, it would be good
> to check that nothing is generating PAUSE frames. Some time back I
> discovered that the el-cheapo switch I was using could not keep up
> with traffic bursts and generated tons of PAUSE frames that severely
> limited throughput.
>
> If you're using Intel adapters, check the xon/xoff counters in
> "sysctl dev.em.0.mac_stats". If you see them increasing, that may
> explain the slow speed.
>
> If you have a switch between your boxes, try bypassing it and
> connecting the boxes directly.
>
> --Artem
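For concreteness, the two-box setup Artem describes is usually wired up
along these lines (a sketch only: the dataset, snapshot, host name, port,
and buffer sizes below are illustrative, not taken from this thread):

    # On the receiving box: listen on a TCP port, buffer incoming data,
    # and feed it to zfs receive.
    mbuffer -s 128k -m 1G -I 9090 | zfs receive storage/backup

    # On the sending box: stream the snapshot into mbuffer, which ships
    # it to the receiver over the network.
    zfs send storage/data@snap | mbuffer -s 128k -m 1G -O receiver:9090

The PAUSE-frame check on an Intel em(4) adapter is a plain sysctl read,
e.g. "sysctl dev.em.0.mac_stats | grep -i xo" run twice a few seconds
apart; rising xon/xoff counters point at flow-control trouble (the exact
counter names vary with the driver version).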
> On Fri, Oct 1, 2010 at 11:51 AM, Dan Langille wrote:
>>
>> On Wed, September 29, 2010 2:04 pm, Dan Langille wrote:
>>> $ zpool iostat 10
>>>                capacity     operations    bandwidth
>>> pool         used  avail   read  write   read  write
>>> ----------  -----  -----  -----  -----  -----  -----
>>> storage     7.67T  5.02T    358     38  43.1M  1.96M
>>> storage     7.67T  5.02T    317    475  39.4M  30.9M
>>> storage     7.67T  5.02T    357    533  44.3M  34.4M
>>> storage     7.67T  5.02T    371    556  46.0M  35.8M
>>> storage     7.67T  5.02T    313    521  38.9M  28.7M
>>> storage     7.67T  5.02T    309    457  38.4M  30.4M
>>> storage     7.67T  5.02T    388    589  48.2M  37.8M
>>> storage     7.67T  5.02T    377    581  46.8M  36.5M
>>> storage     7.67T  5.02T    310    559  38.4M  30.4M
>>> storage     7.67T  5.02T    430    611  53.4M  41.3M
>>
>> Now that I'm using mbuffer:
>>
>> $ zpool iostat 10
>>                capacity     operations    bandwidth
>> pool         used  avail   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> storage     9.96T  2.73T  2.01K    131   151M  6.72M
>> storage     9.96T  2.73T    615    515  76.3M  33.5M
>> storage     9.96T  2.73T    360    492  44.7M  33.7M
>> storage     9.96T  2.73T    388    554  48.3M  38.4M
>> storage     9.96T  2.73T    403    562  50.1M  39.6M
>> storage     9.96T  2.73T    313    468  38.9M  28.0M
>> storage     9.96T  2.73T    462    677  57.3M  22.4M
>> storage     9.96T  2.73T    383    581  47.5M  21.6M
>> storage     9.96T  2.72T    142    571  17.7M  15.4M
>> storage     9.96T  2.72T     80    598  10.0M  18.8M
>> storage     9.96T  2.72T    718    503  89.1M  13.6M
>> storage     9.96T  2.72T    594    517  73.8M  14.1M
>> storage     9.96T  2.72T    367    528  45.6M  15.1M
>> storage     9.96T  2.72T    338    520  41.9M  16.4M
>> storage     9.96T  2.72T    348    499  43.3M  21.5M
>> storage     9.96T  2.72T    398    553  49.4M  14.4M
>> storage     9.96T  2.72T    346    481  43.0M  6.78M
>>
>> If anything, it's slower.
>>
>> The above was without -s 128. The following used that setting:
>>
>> $ zpool iostat 10
>>                capacity     operations    bandwidth
>> pool         used  avail   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> storage     9.78T  2.91T  1.98K    137   149M  6.92M
>> storage     9.78T  2.91T    761    577  94.4M  42.6M
>> storage     9.78T  2.91T    462    411  57.4M  24.6M
>> storage     9.78T  2.91T    492    497  61.1M  27.6M
>> storage     9.78T  2.91T    632    446  78.5M  22.5M
>> storage     9.78T  2.91T    554    414  68.7M  21.8M
>> storage     9.78T  2.91T    459    434  57.0M  31.4M
>> storage     9.78T  2.91T    398    570  49.4M  32.7M
>> storage     9.78T  2.91T    338    495  41.9M  26.5M
>> storage     9.78T  2.91T    358    526  44.5M  33.3M
>> storage     9.78T  2.91T    385    555  47.8M  39.8M
>> storage     9.78T  2.91T    271    453  33.6M  23.3M
>> storage     9.78T  2.91T    270    456  33.5M  28.8M
>>
>> _______________________________________________
>> freebsd-stable@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
>> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
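Since everything here is on the same box, the pipeline being timed is
presumably local, something like the sketch below (the dataset and
snapshot names are placeholders; the "-s 128" in the thread most likely
refers to mbuffer's block-size option, perhaps -s 128k to match the
default 128k ZFS recordsize):

    # One-box replication: send and receive connected through mbuffer,
    # which decouples the reader from the writer with a 1 GB buffer.
    zfs send storage/src@snap | mbuffer -s 128k -m 1G | zfs receive storage/dst

On a single machine both ends of the pipe compete for the same disks, so
mbuffer can only smooth out short bursts; that would be consistent with
the numbers above showing little or no improvement, unlike the two-box
case where the buffer keeps a GigE link fed.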