From owner-freebsd-current@freebsd.org Mon Oct 3 11:22:52 2016
From: Tomoaki AOKI <junchoon@dec.sakura.ne.jp>
To: freebsd-current@freebsd.org
Cc: ohartman@zedat.fu-berlin.de
Date: Mon, 3 Oct 2016 19:41:47 +0900
Subject: Re: ZFS - Abyssal slow on copying
Message-Id: <20161003194147.9934fbbebc1c4cf9d3c5dc83@dec.sakura.ne.jp>
In-Reply-To: <28DF6F19-A97F-4029-9D55-77E14B38B45D@cs.huji.ac.il>
Organization: Junchoon corps

On Mon, 3 Oct 2016 12:35:08 +0300 Daniel Braniss wrote:

> > On 3 Oct 2016, at 12:30 PM, O. Hartmann wrote:
> >
> > Am Sun, 2 Oct 2016 15:30:41 -0400
> > Allan Jude schrieb:
> >
> >> On 2016-10-02 15:25, O. Hartmann wrote:
> >>>
> >>> Running 12-CURRENT (FreeBSD 12.0-CURRENT #32 r306579: Sun Oct 2
> >>> 09:34:50 CEST 2016), I have a NanoBSD setup which creates an image
> >>> for a router device.
> >>>
> >>> The problem I face is related to ZFS. The system has a system SSD
> >>> (Samsung 850 Pro, 256 GB) with a UFS filesystem. Additionally, I
> >>> have a backup and a data HDD, both WD: one 3 TB WD RED Pro and one
> >>> 4 TB WD RED (the backup device). The sources for the NanoBSD, the
> >>> object tree, and the NANO_WORLDDIR all reside on the 3 TB data
> >>> drive.
> >>>
> >>> The box itself has 8 GB RAM. When it comes to creating the memory
> >>> disk, which is ~1.3 GB in size, the NanoBSD script creates the
> >>> memory disk and then installs world into it, and this part is
> >>> abysmally slow.
> >>>
> >>> The drive sounds like hell, the heads are moving rapidly. The copy
> >>> speed is incredibly slow compared to another box I usually use in
> >>> the lab with a UFS-only filesystem (different type of HDD).
> >>>
> >>> Everything the NanoBSD is installed from and to is on a separate
> >>> ZFS dataset, but in the same pool as everything else. When I first
> >>> set up the new datasets, I switched on deduplication, but I quickly
> >>> deactivated it because it had a tremendous impact on the working
> >>> speed and memory consumption of that box. But something has not
> >>> seemed right since then - as I initially described, the
> >>> copy/initialisation speed/bandwidth is abysmal. I also fear that I
> >>> did something wrong when I first initialised the HDD - there is the
> >>> 512-byte/4K block (ashift) discussion, and I do not know how to
> >>> check whether I am affected by that (or even causing the problems),
> >>> nor how to check whether deduplication is definitely OFF (apart
> >>> from listing the usual properties via "zfs get all").
> >>>
> >>> As an example: the NanoBSD script takes ~1 minute to copy
> >>> /boot/loader from source to memory disk, and the HDD makes sounds
> >>> like hell, close to losing the r/w heads. On other boxes this task
> >>> is done in the blink of an eye ...
> >>>
> >>> Thanks for your patience,
> >>>
> >>> Regards,
> >>> oh
> >>>
> >>
> >> Turning deduplication off only stops new blocks from being
> >> deduplicated. Any data written while deduplication was on is still
> >> deduplicated. You would need to zfs send | zfs recv, or
> >> backup/destroy/restore, to get the data back to normal.
> >>
> >> If the drive is making that much noise, have you considered that the
> >> drive might be failing?
> >>
> >
> > Hello.
> >
> > Is there any hint I can investigate on that ZFS dataset that would
> > show me it is still deduplicated? If I did not know that I switched
> > dedup on and later off, I would blame the OS instead. So, for
> > forensic analysis in the future, it would be nice to know how to
> > check this - ideally by simply inspecting the properties of the ZFS
> > dataset ...
> >
> > Thanks,
> > oh
>
> Not really an answer, but zpool has a nice subcommand, history; it
> sometimes helps to find what zfs commands were given, and when.
>
> danny

That looks helpful for confirming when and how things were changed, but
the output contains *all* changes, including snapshots, so it would be
hard to determine the current configuration from it alone.

To see the current settings, use `zfs get all zroot` (for the case that
the pool name is zroot). A child dataset can also be given, like
zroot/local.
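To make that concrete, here is a small sketch of checking the dedup
property across a pool. The pool name zroot is assumed as above, and the
here-doc is illustrative sample output standing in for a live `zfs get`
run, so the parsing can be shown without a real pool:

```shell
#!/bin/sh
# Sketch: list datasets whose dedup property is not "off".
# The here-doc below is hypothetical sample output standing in for:
#   zfs get -r -H -o name,value dedup zroot
# (-r recurses into children, -H drops headers, -o picks columns)
sample_zfs_get() {
    cat <<'EOF'
zroot	off
zroot/local	on
zroot/usr	off
EOF
}

# Print only the datasets where dedup is still enabled.
sample_zfs_get | awk '$2 != "off" { print $1 }'
```

On a live system the pipeline would be
`zfs get -r -H -o name,value dedup zroot | awk '$2 != "off" { print $1 }'`;
an empty result means no dataset currently has dedup enabled (though, as
Allan noted, data written earlier may still be stored deduplicated).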
To list all children, use `zfs get -r all zroot` instead. To focus on
dedup, `zfs get dedup zroot` is handier; specify `-r` again if the child
datasets should be included (`zfs get -r dedup zroot`). Unfortunately,
this subcommand cannot show change logs.

HTH.

--
Tomoaki AOKI
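P.S. Regarding Allan's point above that data written while dedup was on
stays deduplicated: the send/receive rewrite he suggests might be
sketched as below. The dataset names (zroot/data, zroot/data.new) are
hypothetical, and the function only echoes the commands it would run, so
this is a plan to adapt, not something to execute blindly:

```shell
#!/bin/sh
# Sketch: rewrite a dataset so blocks stored while dedup was enabled
# are written out normally again.  Names are made up for illustration;
# each step is echoed rather than executed.
rewrite_dataset() {
    src="$1"; dst="$2"
    echo "zfs snapshot ${src}@rewrite"
    echo "zfs send ${src}@rewrite | zfs receive ${dst}"
    # After verifying the copy, the old dataset would be destroyed
    # and the new one renamed into its place:
    echo "zfs destroy -r ${src}"
    echo "zfs rename ${dst} ${src}"
}

rewrite_dataset zroot/data zroot/data.new
```

The same effect can be had by backup/destroy/restore, as Allan said; the
send/receive route just keeps everything inside the pool.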