Date: Fri, 01 Oct 2010 21:32:45 -0400
From: Dan Langille
Organization: The FreeBSD Diary
To: Artem Belevich, freebsd-stable
Subject: Re: zfs send/receive: is this slow?

On 10/1/2010 7:00 PM, Artem Belevich wrote:
> On Fri, Oct 1, 2010 at 3:49 PM, Dan Langille wrote:
>> FYI: this is all on the same box.
>
> In one of the previous emails you've used this command line:
>> # mbuffer -s 128k -m 1G -I 9090 | zfs receive
>
> You've used mbuffer in network client mode. I assumed that you did do
> your transfer over network.
>
> If you're running send/receive locally just pipe the data through
> mbuffer -- zfs send|mbuffer|zfs receive

As soon as I opened this email I knew what it would say.

# time zfs send storage/bacula@transfer | mbuffer | zfs receive storage/compressed/bacula-mbuffer
in @  197 MB/s, out @  205 MB/s, 1749 MB total, buffer   0% full

$ zpool iostat 10 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     9.78T  2.91T  1.11K    336  92.0M  17.3M
storage     9.78T  2.91T    769    436  95.5M  30.5M
storage     9.78T  2.91T    797    853  98.9M  78.5M
storage     9.78T  2.91T    865    962   107M  78.0M
storage     9.78T  2.91T    828    881   103M  82.6M
storage     9.78T  2.90T   1023  1.12K   127M  91.0M
storage     9.78T  2.90T  1.01K  1.01K   128M  89.3M
storage     9.79T  2.90T    962  1.08K   119M  89.1M
storage     9.79T  2.90T  1.09K  1.25K   139M  67.8M

Big difference.  :)

> --Artem
>
>> --
>> Dan Langille
>> http://langille.org/
>>
>> On Oct 1, 2010, at 5:56 PM, Artem Belevich wrote:
>>
>>> Hmm. It did help me a lot when I was replicating ~2TB worth of data
>>> over GigE. Without mbuffer things were roughly in the ballpark of your
>>> numbers. With mbuffer I've got around 100MB/s.
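[For reference, a two-box version of the pipeline -- what the "-I 9090"
invocation above was one half of -- would look roughly like this; the host
name "receiver" is just a placeholder, not one taken from this thread:

  receiver# mbuffer -s 128k -m 1G -I 9090 | zfs receive storage/compressed/bacula-mbuffer
  sender#   zfs send storage/bacula@transfer | mbuffer -s 128k -m 1G -O receiver:9090

mbuffer's -I listens on a TCP port and -O connects to it, so the buffer
sits at both ends of the network hop.]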
>>>
>>> Assuming that you have two boxes connected via ethernet, it would be
>>> good to check that nobody generates PAUSE frames. Some time back I've
>>> discovered that el-cheapo switch I've been using for some reason could
>>> not keep up with traffic bursts and generated tons of PAUSE frames
>>> that severely limited throughput.
>>>
>>> If you're using Intel adapters, check xon/xoff counters in "sysctl
>>> dev.em.0.mac_stats". If you see them increasing, that may explain slow
>>> speed.
>>> If you have a switch between your boxes, try bypassing it and connect
>>> boxes directly.
>>>
>>> --Artem
>>>
>>> On Fri, Oct 1, 2010 at 11:51 AM, Dan Langille wrote:
>>>>
>>>> On Wed, September 29, 2010 2:04 pm, Dan Langille wrote:
>>>>> $ zpool iostat 10
>>>>>                capacity     operations    bandwidth
>>>>> pool         used  avail   read  write   read  write
>>>>> ----------  -----  -----  -----  -----  -----  -----
>>>>> storage     7.67T  5.02T    358     38  43.1M  1.96M
>>>>> storage     7.67T  5.02T    317    475  39.4M  30.9M
>>>>> storage     7.67T  5.02T    357    533  44.3M  34.4M
>>>>> storage     7.67T  5.02T    371    556  46.0M  35.8M
>>>>> storage     7.67T  5.02T    313    521  38.9M  28.7M
>>>>> storage     7.67T  5.02T    309    457  38.4M  30.4M
>>>>> storage     7.67T  5.02T    388    589  48.2M  37.8M
>>>>> storage     7.67T  5.02T    377    581  46.8M  36.5M
>>>>> storage     7.67T  5.02T    310    559  38.4M  30.4M
>>>>> storage     7.67T  5.02T    430    611  53.4M  41.3M
>>>>
>>>> Now that I'm using mbuffer:
>>>>
>>>> $ zpool iostat 10
>>>>                capacity     operations    bandwidth
>>>> pool         used  avail   read  write   read  write
>>>> ----------  -----  -----  -----  -----  -----  -----
>>>> storage     9.96T  2.73T  2.01K    131   151M  6.72M
>>>> storage     9.96T  2.73T    615    515  76.3M  33.5M
>>>> storage     9.96T  2.73T    360    492  44.7M  33.7M
>>>> storage     9.96T  2.73T    388    554  48.3M  38.4M
>>>> storage     9.96T  2.73T    403    562  50.1M  39.6M
>>>> storage     9.96T  2.73T    313    468  38.9M  28.0M
>>>> storage     9.96T  2.73T    462    677  57.3M  22.4M
>>>> storage     9.96T  2.73T    383    581  47.5M  21.6M
>>>> storage     9.96T  2.72T    142    571  17.7M  15.4M
>>>> storage     9.96T  2.72T     80    598  10.0M  18.8M
>>>> storage     9.96T  2.72T    718    503  89.1M  13.6M
>>>> storage     9.96T  2.72T    594    517  73.8M  14.1M
>>>> storage     9.96T  2.72T    367    528  45.6M  15.1M
>>>> storage     9.96T  2.72T    338    520  41.9M  16.4M
>>>> storage     9.96T  2.72T    348    499  43.3M  21.5M
>>>> storage     9.96T  2.72T    398    553  49.4M  14.4M
>>>> storage     9.96T  2.72T    346    481  43.0M  6.78M
>>>>
>>>> If anything, it's slower.
>>>>
>>>> The above was without -s 128.
>>>> The following used that setting:
>>>>
>>>> $ zpool iostat 10
>>>>                capacity     operations    bandwidth
>>>> pool         used  avail   read  write   read  write
>>>> ----------  -----  -----  -----  -----  -----  -----
>>>> storage     9.78T  2.91T  1.98K    137   149M  6.92M
>>>> storage     9.78T  2.91T    761    577  94.4M  42.6M
>>>> storage     9.78T  2.91T    462    411  57.4M  24.6M
>>>> storage     9.78T  2.91T    492    497  61.1M  27.6M
>>>> storage     9.78T  2.91T    632    446  78.5M  22.5M
>>>> storage     9.78T  2.91T    554    414  68.7M  21.8M
>>>> storage     9.78T  2.91T    459    434  57.0M  31.4M
>>>> storage     9.78T  2.91T    398    570  49.4M  32.7M
>>>> storage     9.78T  2.91T    338    495  41.9M  26.5M
>>>> storage     9.78T  2.91T    358    526  44.5M  33.3M
>>>> storage     9.78T  2.91T    385    555  47.8M  39.8M
>>>> storage     9.78T  2.91T    271    453  33.6M  23.3M
>>>> storage     9.78T  2.91T    270    456  33.5M  28.8M

-- 
Dan Langille - http://langille.org/
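[Artem's pause-frame check above boils down to something like the following
-- the exact counter names under dev.em.0.mac_stats can vary by driver
version, so treat this as a sketch:

  # dump the em(4) MAC statistics and watch the flow-control counters
  sysctl dev.em.0.mac_stats | grep -E 'xon|xoff'

If the xon/xoff counts keep climbing during a transfer, something in the
path is asserting flow control and throttling the link.]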