From owner-freebsd-fs@freebsd.org Fri May 19 07:34:40 2017
From: kc atgb <kisscoolandthegangbang@hotmail.fr>
To: "freebsd-fs@freebsd.org"
Subject: Re: Different size after zfs send receive
Date: Fri, 19 May 2017 07:34:36 +0000
In-Reply-To: <58A6B47B-2992-4BB8-A80E-44F74EAE93B2@longcount.org>

On Thu, 18 May 2017 21:53:23 +0000, Mark Saad wrote:

Hi,

I see what you are talking about, I think. You are referring to "raid"
splitting, right? In that case it is something in the "internals" of the
raid system. Isn't zfs list supposed to report raw data sizes (without
metadata, checksums, ...)? I don't really think that is related to what
I am referring to.

Look: for the same pool configuration (one 4-disk raidz1 vdev), with the
same disks and the same data, it reports 5819085888 for storage/usrobj
before the backup and 5820359616 after the restore to the recreated pool.

Even for pools with a single-disk vdev (again: same disks, same
configuration, same data as above...), the same dataset shows 5675081728
on the backup1 disk and 5675188224 on backup2.

The differences aren't big, but the numbers differ, and I would expect
them to be identical.

K.
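P.S. Would comparing the logicalused property be a fair test here? As I
understand it (and I may be wrong), logicalused counts bytes before
compression and before any raidz parity, so with exact byte counts it
should match across pools even when used does not. Something like:

  # parsable exact values; logicalused ignores raid overhead and compression
  zfs get -p used,logicalused storage/usrobj
  zfs get -p used,logicalused b1/usrobj b2/usrobj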
> Hi kc
> This has to do with how data blocks are replicated when stored on a
> raidzN. Moving them to a mirror removes the replicated blocks. This is
> way oversimplified, but imagine you store a 10 GB file on a raidz1. The
> system splits the file into smaller chunks, of say 1 MB, and stores one
> extra chunk for each chunk that is striped around the raidz1. Storing
> on a mirror just writes the chunk once on each disk. However, with a
> mirror, since you only see 1/2 the number of disks, you never see the
> extra chunks in the used field.
>
> Hope this helps.
>
> ---
> Mark Saad | nonesuch@longcount.org
>
> > On May 18, 2017, at 3:36 PM, kc atgb wrote:
> >
> > Hi,
> >
> > Some days ago I needed to back up my current pool and restore it
> > after a pool destroy and create.
> >
> > The pool in my home server is a raidz1 with 4 disks. To back up this
> > pool I grabbed two 4 TB disks (single-disk pools) to have a double
> > backup (I have just one SATA port left I can use to plug in a disk).
> >
> > The whole process of backup and restore went well, as far as I can
> > tell. But looking at the sizes reported by zfs list makes me a little
> > bit curious.
> >
> > NAME                           USED         AVAIL          REFER        MOUNTPOINT
> > storage/datas/ISO              35420869824  381747995136   35420726976  /datas/ISO
> > storage/datas/ISO@backup_send  142848       -              35420726976  -
> > storage/datas/ISO@backup_sync  0            -              35420726976  -
> >
> > b1/datas/ISO                   35439308800  2176300351488  35439210496  /datas/ISO
> > b1/datas/ISO@backup_send       98304        -              35439210496  -
> > b1/datas/ISO@backup_sync       0            -              35439210496  -
> >
> > b2/datas/ISO                   35439308800  2176298991616  35439210496  /datas/ISO
> > b2/datas/ISO@backup_send       98304        -              35439210496  -
> > b2/datas/ISO@backup_sync       0            -              35439210496  -
> >
> > storage/datas/ISO              35421024576  381303470016   35420715072  /datas/ISO
> > storage/datas/ISO@backup_send  142848       -              35420715072  -
> > storage/datas/ISO@backup_sync  11904        -              35420715072  -
> >
> >
> > storage/usrobj                 5819085888   381747995136   5816276544   legacy
> > storage/usrobj@create          166656       -              214272       -
> > storage/usrobj@backup_send     2642688      -              5816228928   -
> > storage/usrobj@backup_sync     0            -              5816276544   -
> >
> > b1/usrobj                      5675081728   2176300351488  5673222144   legacy
> > b1/usrobj@create               114688       -              147456       -
> > b1/usrobj@backup_send          1744896      -              5673222144   -
> > b1/usrobj@backup_sync          0            -              5673222144   -
> >
> > b2/usrobj                      5675188224   2176298991616  5673328640   legacy
> > b2/usrobj@create               114688       -              147456       -
> > b2/usrobj@backup_send          1744896      -              5673328640   -
> > b2/usrobj@backup_sync          0            -              5673328640   -
> >
> > storage/usrobj                 5820359616   381303470016   5815098048   legacy
> > storage/usrobj@create          166656       -              214272       -
> > storage/usrobj@backup_send     2535552      -              5815098048   -
> > storage/usrobj@backup_sync     11904        -              5815098048   -
> >
> > As you can see, the numbers are different for each pool (the initial
> > raidz1, the backup1 disk, the backup2 disk and the new raidz1); I
> > mean in the USED column. Nearly all my datasets are in the same
> > situation (those with fixed data that has not changed between the
> > beginning of the process and now). backup1 and backup2 are identical
> > disks with exactly the same configuration, yet they show different
> > numbers. I used the same commands for all my transfers except for
> > the name of the destination pool.
> >
> > So, I wonder: what can cause these differences? Is it something I
> > have to worry about? Can I consider this normal behavior?
> >
> > Thanks for your enlightenment,
> > K.
> > _______________________________________________
> > freebsd-fs@freebsd.org mailing list
> > https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
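For reference, a transfer like the one described above could have looked
roughly like this; the snapshot names are taken from the zfs list output,
but the actual commands were not posted, so this is only an illustration:

  # full replication stream to the first backup pool (hypothetical)
  zfs snapshot -r storage@backup_send
  zfs send -R storage@backup_send | zfs receive -Fu b1

  # final incremental pass once the source is quiet (hypothetical)
  zfs snapshot -r storage@backup_sync
  zfs send -R -i @backup_send storage@backup_sync | zfs receive -Fu b1

A -R stream recreates every descendant dataset and snapshot on the target,
which is why b1/usrobj carries the same @create, @backup_send and
@backup_sync snapshots as the source dataset.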
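As for the raidz accounting Mark describes: the used column on a raidz
vdev includes parity and allocation padding, while used on a plain disk
or mirror does not. A back-of-the-envelope sketch for a 4-disk raidz1,
assuming ashift=12 (4K sectors) and the default 128K recordsize (these
parameters are assumptions for illustration, not taken from the thread):

  # rough raidz1 allocation arithmetic; illustrative only
  recordsize=131072
  sector=4096
  ndisks=4
  nparity=1
  data=$((recordsize / sector))             # 32 data sectors per record
  width=$((ndisks - nparity))               # 3 data sectors per stripe row
  parity=$(( (data + width - 1) / width ))  # 11 rows -> 11 parity sectors
  mult=$((nparity + 1))
  # raidz rounds each allocation up to a multiple of (nparity + 1) sectors
  alloc=$(( (data + parity + mult - 1) / mult * mult ))   # 44 sectors
  echo "$((alloc * sector)) bytes on disk for $((data * sector)) bytes of data"

So a 128K record costs about 176K on this layout: roughly the 4/3 parity
ratio plus a little padding. That accounts for the gap between the raidz1
pool and the single-disk backups; the small residual differences between
otherwise identical pools are presumably down to metadata layout varying
between receives.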