From owner-freebsd-questions@FreeBSD.ORG Tue Apr  2 11:44:09 2013
Date: Tue, 2 Apr 2013 13:44:07 +0200
Subject: Regarding zfs send / receive
From: Joar Jegleim <joar.jegleim@gmail.com>
To: freebsd-questions@freebsd.org
List-Id: User questions

Hi FreeBSD!

So I've got this setup where a storage server delivers about 2 million
JPEGs as the backend for a website (it's ~1 TB of data). The storage
server runs ZFS, and every 15 minutes it does a zfs send to a 'slave';
our proxy will fail over to the slave if the main storage server goes
down.

I've got a script that initially zfs sends the whole ZFS volume, and
every send after that sends only the incremental diff (a rough sketch
of the script is at the end of this message).

I've had increasing problems on the 'slave': it seems to grind to a
halt for anything between 5 and 20 seconds after every zfs receive.
I've had a couple of goes at trying to solve / figure out what's
happening, without luck, and this third time I've invested even more
time in the problem.

To sum it up:

- The server was initially on 8.2-RELEASE.
- I've set some sysctl variables, such as:

# 16 GB arc_max (the server has 30 GB of RAM, but we had a couple of
# 'freeze' situations; I suspect the ZFS ARC ate too much memory)
vfs.zfs.arc_max=17179869184

# 8.2 defaults to 30 here; setting it to 5, which is the default from
# 8.3 onwards
vfs.zfs.txg.timeout="5"

# Set TXG write limit to a lower threshold. This helps "level out"
# the throughput rate (see "zpool iostat"). A value of 256MB works well
# for systems with 4 GB of RAM, while 1 GB works well for us w/ 8 GB on
# disks which have 64 MB cache.
# NOTE: in
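
Roughly, the replication script does something like this (the dataset
name, slave hostname, snapshot naming and the ssh transport below are
placeholders, not our exact setup):

#!/bin/sh
# Sketch of the 15-minute replication job (placeholder names/paths).
POOL=tank/images            # dataset holding the jpegs (placeholder)
SLAVE=slave.example.com     # receiving host (placeholder)
NOW=$(date +%Y%m%d%H%M)
PREV=$(cat /var/db/last_repl_snap 2>/dev/null)

# Take a new snapshot on the master.
zfs snapshot ${POOL}@${NOW} || exit 1

if [ -z "${PREV}" ]; then
        # First run: send the whole dataset.
        zfs send ${POOL}@${NOW} | ssh ${SLAVE} "zfs receive -F ${POOL}"
else
        # Every run after that: send only the diff since the last snapshot.
        zfs send -i ${POOL}@${PREV} ${POOL}@${NOW} | \
                ssh ${SLAVE} "zfs receive -F ${POOL}"
fi

echo ${NOW} > /var/db/last_repl_snap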
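
For completeness, the tunables and the behaviour of the slave during a
receive can be checked with plain sysctl and zpool iostat, e.g.:

# vfs.zfs.arc_max is a loader tunable (set in /boot/loader.conf),
# so verify the running value:
sysctl vfs.zfs.arc_max vfs.zfs.txg.timeout

# Watch per-vdev throughput once a second while a zfs receive runs;
# the 5-20 second hang should be visible here.
zpool iostat -v 1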