From owner-freebsd-fs@freebsd.org Thu May 18 21:53:29 2017
Subject: Re: Different size after zfs send receive
From: Mark Saad <nonesuch@longcount.org>
Date: Thu, 18 May 2017 17:53:23 -0400
To: kc atgb
Cc: "freebsd-fs@freebsd.org"
Message-Id: <58A6B47B-2992-4BB8-A80E-44F74EAE93B2@longcount.org>
List-Id: Filesystems

Hi kc,

This has to do with how data blocks are replicated when stored on a raidzN; moving them to a mirror removes the replicated blocks from the accounting. This is way oversimplified, but imagine you store a 10 GB file on a raidz1. The system splits the file into smaller chunks, of say 1 MB, and stores one extra parity chunk for each stripe of data chunks written across the raidz1; those parity chunks count toward the space the dataset is charged for. Storing on a mirror just writes each chunk once to each disk.
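To put rough numbers on that picture, here is a back-of-the-envelope sketch (hypothetical arithmetic only, not output from any real pool; it ignores sector padding, metadata, and compression):

```shell
# Sketch of the raidz1 overhead described above. Assumes a 4-disk
# raidz1 (3 data + 1 parity chunk per stripe), so USED is charged
# roughly 4/3 of the logical size. A mirror (or single-disk pool)
# keeps redundancy at the vdev level, so USED shows the logical
# size once. Illustrative numbers only.
logical=$((10 * 1024 * 1024 * 1024))   # 10 GiB file
raidz1_used=$((logical * 4 / 3))       # data + parity charged to USED
mirror_used=$logical                   # parity/copies not visible in USED
echo "logical size: $logical"
echo "raidz1 USED:  $raidz1_used"
echo "mirror USED:  $mirror_used"
```

On a live system you can compare allocated and logical sizes directly with something like `zfs list -p -o name,used,logicalused` (`-p` prints exact byte counts), assuming your ZFS version exposes the `logicalused` property.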
With a mirror, however, since you only see half the number of disks, you never see the extra parity chunks in the USED field.

Hope this helps.

---
Mark Saad | nonesuch@longcount.org

> On May 18, 2017, at 3:36 PM, kc atgb wrote:
>
> Hi,
>
> Some days ago I had a need to back up my current pool and restore it after a pool destroy and create.
>
> The pool in my home server is a raidz1 with 4 disks. To back up this pool I grabbed two 4TB disks (single-disk pools) to have a double backup (I have just one SATA port left that I can use to plug in a disk).
>
> The whole process of backup and restore went well, as far as I can tell. But looking at the sizes reported by zfs list makes me a little bit curious.
>
> NAME                               USED          AVAIL        REFER  MOUNTPOINT
> storage/datas/ISO           35420869824   381747995136  35420726976  /datas/ISO
> storage/datas/ISO@backup_send    142848              -  35420726976  -
> storage/datas/ISO@backup_sync         0              -  35420726976  -
>
> b1/datas/ISO                35439308800  2176300351488  35439210496  /datas/ISO
> b1/datas/ISO@backup_send          98304              -  35439210496  -
> b1/datas/ISO@backup_sync              0              -  35439210496  -
>
> b2/datas/ISO                35439308800  2176298991616  35439210496  /datas/ISO
> b2/datas/ISO@backup_send          98304              -  35439210496  -
> b2/datas/ISO@backup_sync              0              -  35439210496  -
>
> storage/datas/ISO           35421024576   381303470016  35420715072  /datas/ISO
> storage/datas/ISO@backup_send    142848              -  35420715072  -
> storage/datas/ISO@backup_sync     11904              -  35420715072  -
>
>
> storage/usrobj               5819085888   381747995136   5816276544  legacy
> storage/usrobj@create            166656              -       214272  -
> storage/usrobj@backup_send      2642688              -   5816228928  -
> storage/usrobj@backup_sync            0              -   5816276544  -
>
> b1/usrobj                    5675081728  2176300351488   5673222144  legacy
> b1/usrobj@create                 114688              -       147456  -
> b1/usrobj@backup_send           1744896              -   5673222144  -
> b1/usrobj@backup_sync                 0              -   5673222144  -
>
> b2/usrobj                    5675188224  2176298991616   5673328640  legacy
> b2/usrobj@create                 114688              -       147456  -
> b2/usrobj@backup_send           1744896              -   5673328640  -
> b2/usrobj@backup_sync                 0              -   5673328640  -
>
> storage/usrobj               5820359616   381303470016   5815098048  legacy
> storage/usrobj@create            166656              -       214272  -
> storage/usrobj@backup_send      2535552              -   5815098048  -
> storage/usrobj@backup_sync        11904              -   5815098048  -
>
> As you can see, the numbers are different for each pool (the initial raidz1, the backup1 disk, the backup2 disk, and the new raidz1). I mean in the USED column. I have nearly all my datasets in the same situation (those with fixed data that have not changed between the beginning of the process and now). backup1 and backup2 are identical disks with exactly the same configuration, and yet they show different numbers. I used the same commands for all my transfers, except for the name of the destination pool.
>
> So, I wonder what can cause these differences? Is it something I have to worry about? Can I consider this normal behavior?
>
> Thanks for your enlightenment,
> K.
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"