From owner-freebsd-fs@freebsd.org Tue Apr 26 14:02:40 2016
Date: Tue, 26 Apr 2016 15:02:29 +0100
From: krad <kraduk@gmail.com>
To: Miroslav Lachman <000.fbsd@quip.cz>
Cc: jg@internetx.com, FreeBSD FS <freebsd-fs@freebsd.org>
Subject: Re: How to speed up slow zpool scrub?
In-Reply-To: <571F6EA4.90800@quip.cz>
References: <571F62AD.6080005@quip.cz> <571F687D.8040103@internetx.com> <571F6EA4.90800@quip.cz>

Erk, I would try to move your system pool off those data disks: you currently
have two pools competing for the same spindles, which is never ideal. By all
means back your OS up to the data pool, but keep the live system on separate
physical media. A couple of small SSDs would do the trick nicely and could
probably be added with no downtime, though you would still want a suitable
maintenance window to make sure the box reboots cleanly. A rough sketch of the
migration is just below, and there is an idle-hours scrub tweak at the very
bottom of this mail.
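
This is only a rough, untested sketch of the idea - it assumes the new SSDs
show up as ada4/ada5 and uses made-up GPT labels (ssd0boot, ssd0sys), so
adjust device names, sizes and labels for your hardware:

# gpart create -s gpt ada4
# gpart add -t freebsd-boot -s 512k -l ssd0boot ada4
# gpart add -t freebsd-zfs -s 15G -l ssd0sys ada4
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada4
(repeat for ada5 with ssd1* labels)
# zpool attach sys gpt/disk0sys gpt/ssd0sys
# zpool status sys        (wait until the resilver finishes before going on)
# zpool detach sys gpt/disk0sys
(then attach gpt/ssd1sys the same way, and detach the remaining HDD legs one
at a time)

That keeps the sys pool redundant the whole time, and once the last HDD leg
is detached those partitions stop stealing seeks from the tank0 scrub.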
On 26 April 2016 at 14:35, Miroslav Lachman <000.fbsd@quip.cz> wrote:

> InterNetX - Juergen Gotteswinter wrote on 04/26/2016 15:09:
>
>> to speed up the scrub itself you can try
>>
>> sysctl vfs.zfs.scrub_delay=4 (4 is the default; 0 gives the scrub higher priority)
>
> I will try it in the idle times
>
>> but be careful as this can cause a serious performance impact, the value
>> can be changed on the fly
>>
>> your pool is raidz, mirror ? dedup is hopefully disabled?
>
> I forgot to mention it. Disks are partitioned into four partitions:
>
> # gpart show -l ada0
> =>        34  7814037101  ada0  GPT  (3.6T)
>           34           6        - free -  (3.0K)
>           40        1024     1  boot0  (512K)
>         1064    10485760     2  swap0  (5.0G)
>     10486824    31457280     3  disk0sys  (15G)
>     41944104  7769948160     4  disk0tank0  (3.6T)
>   7811892264     2144871        - free -  (1.0G)
>
> diskXsys partitions are used for the base system pool, which is a 4-way mirror
>
> diskXtank0 partitions are used for data storage as RAIDZ
>
> # zpool list
> NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> sys    14.9G  11.0G  3.92G         -    79%    73%  1.00x  ONLINE  -
> tank0  14.4T  10.8T  3.56T         -    19%    75%  1.00x  ONLINE  -
>
> # zpool status -v
>   pool: sys
>  state: ONLINE
>   scan: scrub repaired 0 in 1h2m with 0 errors on Sun Apr 24 04:03:54 2016
> config:
>
>         NAME              STATE     READ WRITE CKSUM
>         sys               ONLINE       0     0     0
>           mirror-0        ONLINE       0     0     0
>             gpt/disk0sys  ONLINE       0     0     0
>             gpt/disk1sys  ONLINE       0     0     0
>             gpt/disk2sys  ONLINE       0     0     0
>             gpt/disk3sys  ONLINE       0     0     0
>
> errors: No known data errors
>
>   pool: tank0
>  state: ONLINE
>   scan: scrub in progress since Sun Apr 24 03:01:35 2016
>         7.63T scanned out of 10.6T at 36.7M/s, 23h32m to go
>         0 repaired, 71.98% done
> config:
>
>         NAME                STATE     READ WRITE CKSUM
>         tank0               ONLINE       0     0     0
>           raidz1-0          ONLINE       0     0     0
>             gpt/disk0tank0  ONLINE       0     0     0
>             gpt/disk1tank0  ONLINE       0     0     0
>             gpt/disk2tank0  ONLINE       0     0     0
>             gpt/disk3tank0  ONLINE       0     0     0
>
> errors: No known data errors
>
> # zdb | grep ashift
>             ashift: 12
>             ashift: 12
>
> Thank you for your information.
>
>
>> On 4/26/2016 at 2:44 PM, Miroslav Lachman wrote:
>>
>>> Hi,
>>>
>>> is there any way to make zpool scrub faster?
>>> We have one older machine with a Pentium(R) Dual E2160 CPU @ 1.80GHz, 5GB
>>> of RAM and 4x 4TB HDDs. It is just a storage box for backups of about 20
>>> machines.
>>> Scrub is scheduled from periodic every 30 days but it takes about 4 days
>>> to complete, and everything during the scrub is slow. Backups take 8 hours
>>> instead of 5 (made by rsync), and deleting old files is even slower.
>>>
>>> The backups are made every night from midnight to morning; the machine
>>> is idle for the rest of the day.
>>>
>>> Is there any tuning to make scrub faster in this idle time?
>>> Or is it better to do it the other way - a slower scrub with even lower
>>> priority taking about one week, but not affecting normal operations?
>>> (Is it dangerous to have a scrub running this long, or to reboot the
>>> machine during the scrub?)
>>>
>>> I have performance graphs of this machine; the CPU is about 70% idle
>>> during a scrub, but the hard drives are about 75% busy (according to
>>> iostat).
>>>
>>> FreeBSD 10.3-RELEASE amd64 GENERIC
>>>
>>> Miroslav Lachman
>>
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
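
Regarding the scrub_delay tweak Juergen mentioned: since the box is idle from
morning until midnight, you could simply unthrottle the scrub during that
window and put the default back before the backups kick off. Only a rough,
untested sketch - the crontab times are made up, so line them up with your
real backup window (4 is the stock value of vfs.zfs.scrub_delay on 10.3, as
Juergen said):

(root crontab)
30 8 * * *     /sbin/sysctl vfs.zfs.scrub_delay=0
30 23 * * *    /sbin/sysctl vfs.zfs.scrub_delay=4

The value changes on the fly, so no reboot is needed; just keep an eye on
iostat the first time to see how hard it pushes the disks.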