Date: Wed, 27 Apr 2016 07:26:33 +0100
Subject: Re: How to speed up slow zpool scrub?
From: krad <kraduk@gmail.com>
To: Miroslav Lachman <000.fbsd@quip.cz>
Cc: jg@internetx.com, FreeBSD FS
In-Reply-To: <571F7AED.2040500@quip.cz>

That's a shame. I have used an SSD on USB before with success, as their
quality is usually better than pen drives.

On 26 April 2016 at 15:27, Miroslav Lachman <000.fbsd@quip.cz> wrote:

> krad wrote on 04/26/2016 16:02:
>
>> Erk, I would try to move your system off those data disks, as you have
>> two pools competing for the same disk spindles. This is never ideal.
>> By all means back up your OS to those data pools, but keep it on
>> separate physical media. A couple of small SSDs would do the trick
>> nicely and could probably be added with no downtime. You would want to
>> find a suitable maintenance window, though, to make sure the box
>> reboots cleanly.
>
> The system pool is really small - only 15GB - and its scrub finishes
> relatively fast. This machine cannot take additional disks, so I cannot
> move the system to other devices anyway. I tried running the system
> from a USB flash disk (read only) in the past, but it was slow and the
> USB disk died early.
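For anyone else following the thread: a minimal sketch of what the suggested separate SSD system pool could look like. The device names (ada4/ada5) and pool name (ssdsys) are made up for illustration, boot configuration (bootcode, bootfs) is deliberately omitted, and as noted above this particular machine cannot take extra disks, so this is purely illustrative:

```shell
# Hypothetical: create a small mirrored system pool on two spare SSDs
# (ada4/ada5 and "ssdsys" are invented names; adjust to your hardware).
zpool create -o ashift=12 ssdsys mirror /dev/ada4 /dev/ada5

# Replicate the running system onto it with a recursive snapshot.
zfs snapshot -r sys@migrate
zfs send -R sys@migrate | zfs receive -F ssdsys
```

This keeps the OS pool off the spindles the data pool scrubs, which is the point of the advice above.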
> On 26 April 2016 at 14:35, Miroslav Lachman <000.fbsd@quip.cz> wrote:
>
>> InterNetX - Juergen Gotteswinter wrote on 04/26/2016 15:09:
>>
>>> To speed up the scrub itself you can try:
>>>
>>>   sysctl vfs.zfs.scrub_delay=4
>>>
>>> (4 is the default; 0 gives the scrub higher priority)
>>
>> I will try it during idle times.
>>
>>> But be careful, as this can cause a serious performance impact; the
>>> value can be changed on the fly.
>>>
>>> Is your pool raidz or mirror? Dedup is hopefully disabled?
>>
>> I forgot to mention it. The disks are partitioned into four
>> partitions:
>>
>> # gpart show -l ada0
>> =>          34  7814037101  ada0  GPT  (3.6T)
>>             34           6        - free -  (3.0K)
>>             40        1024     1  boot0  (512K)
>>           1064    10485760     2  swap0  (5.0G)
>>       10486824    31457280     3  disk0sys  (15G)
>>       41944104  7769948160     4  disk0tank0  (3.6T)
>>     7811892264     2144871        - free -  (1.0G)
>>
>> The diskXsys partitions are used for the base system pool, which is a
>> 4-way mirror. The diskXtank0 partitions are used for data storage as
>> RAIDZ.
>>
>> # zpool list
>> NAME    SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
>> sys    14.9G  11.0G  3.92G         -   79%  73%  1.00x  ONLINE  -
>> tank0  14.4T  10.8T  3.56T         -   19%  75%  1.00x  ONLINE  -
>>
>> # zpool status -v
>>   pool: sys
>>  state: ONLINE
>>   scan: scrub repaired 0 in 1h2m with 0 errors on Sun Apr 24 04:03:54 2016
>> config:
>>
>>         NAME              STATE     READ WRITE CKSUM
>>         sys               ONLINE       0     0     0
>>           mirror-0        ONLINE       0     0     0
>>             gpt/disk0sys  ONLINE       0     0     0
>>             gpt/disk1sys  ONLINE       0     0     0
>>             gpt/disk2sys  ONLINE       0     0     0
>>             gpt/disk3sys  ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>>   pool: tank0
>>  state: ONLINE
>>   scan: scrub in progress since Sun Apr 24 03:01:35 2016
>>         7.63T scanned out of 10.6T at 36.7M/s, 23h32m to go
>>         0 repaired, 71.98% done
>> config:
>>
>>         NAME                STATE     READ WRITE CKSUM
>>         tank0               ONLINE       0     0     0
>>           raidz1-0          ONLINE       0     0     0
>>             gpt/disk0tank0  ONLINE       0     0     0
>>             gpt/disk1tank0  ONLINE       0     0     0
>>             gpt/disk2tank0  ONLINE       0     0     0
>>             gpt/disk3tank0  ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> # zdb | grep ashift
>>             ashift: 12
>>             ashift: 12
>>
>> Thank you for your information.
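As a sanity check on the ETA in the quoted status output: the remaining time is simply (total - scanned) / rate, and a quick awk one-liner reproduces zpool's estimate to within rounding (zpool works from exact byte counts, so the last couple of minutes differ):

```shell
# Recompute the scrub ETA from the figures above:
# 7.63T scanned out of 10.6T at 36.7M/s
awk 'BEGIN {
    remaining_mib = (10.6 - 7.63) * 1024 * 1024   # TiB left, in MiB
    secs = remaining_mib / 36.7                   # MiB / (MiB/s)
    printf "%dh%02dm\n", secs / 3600, (secs % 3600) / 60
}'
# prints 23h34m - close to the 23h32m zpool reported
```

At ~36.7M/s across four spindles this works out to roughly a day of scrubbing for 10.6T, which is why the scrub_delay tuning discussed above matters.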