From owner-freebsd-fs@FreeBSD.ORG Mon Apr 8 07:18:52 2013
From: Joar Jegleim <joar.jegleim@gmail.com>
To: Ronald Klop
Cc: "freebsd-fs@freebsd.org"
Date: Mon, 8 Apr 2013 09:18:50 +0200
Subject: Re: Regarding regular zfs

The rsync was running from the live system.
As I wrote earlier, the problem seems to occur only while the backup server
is rsync'ing from the slave (the zfs receiving side), so I was actually
trying to figure out whether this is to be expected (as in zfs send/receive,
where the receiving end gets a diff and rolls 'back' to the version of the
latest snapshot from the 'master') with a setup holding more than 1 TB of
data and more than 2 million files.

On 5 April 2013 16:07, Ronald Klop wrote:
> On Fri, 05 Apr 2013 15:02:12 +0200, Joar Jegleim wrote:
>
>> You make some interesting points.
>> I don't _think_ the script causes more than 1 zfs write at a time, and
>> I'm sure 'nothing else' is doing that either. But I'm going to check
>> that, because it does sound like a logical explanation.
>> I'm wondering if the rsync from the receiving server (that is: the
>> backup server is doing rsync from the zfs receive server) could cause
>> the same problem; it's only reading, though ...
>
> Do you run the rsync from a snapshot or from the 'live' filesystem? The
> live one changes during zfs receive. I don't know if that has anything to
> do with your problem, but rsync from a snapshot gives a consistent backup
> anyway.
>
> BTW: It is probably simpler for you to test whether the rsync is related
> to the problem than for other people to theorize about it here.
>
> Ronald.

--
----------------------
Joar Jegleim
Homepage: http://cosmicb.no
Linkedin: http://no.linkedin.com/in/joarjegleim
fb: http://www.facebook.com/joar.jegleim
AKA: CosmicB @Freenode
----------------------
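
For illustration, a minimal sketch of Ronald's suggestion (rsync from a
snapshot rather than the live filesystem) could look like the following on
the receiving box; the dataset, snapshot and host names are assumptions for
the example, not taken from the thread:

    #!/bin/sh
    # Hypothetical names: tank/data is the dataset being zfs-received,
    # backupserver:/backups/data is the rsync destination.
    SNAP=rsync-backup-$(date +%Y%m%d)

    # Take a snapshot on the slave; this freezes the dataset state.
    zfs snapshot tank/data@$SNAP

    # ZFS exposes snapshots read-only under .zfs/snapshot, so rsync reads
    # a consistent view even while zfs receive keeps updating the live
    # filesystem.
    rsync -a /tank/data/.zfs/snapshot/$SNAP/ backupserver:/backups/data/

    # Drop the snapshot once the backup has finished.
    zfs destroy tank/data@$SNAP

Note that an incremental 'zfs receive -F' rolls the dataset back and can
destroy snapshots created on the receiving side after the most recently
received one, so a backup snapshot like this is safest taken (and rsynced)
between receives.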