From: Freddie Cash <fjwcash@gmail.com>
To: Jeremy Chadwick
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>
Date: Thu, 4 Jul 2013 13:56:44 -0700
Subject: Re: Slow resilvering with mirrored ZIL

On Thu, Jul 4, 2013 at 12:12 PM, Jeremy Chadwick wrote:

> I believe -- but I need someone else to chime in here with
> confirmation, particularly someone who is familiar with ZFS's
> internals -- once your pool is ashift 12, you can do a disk
> replacement ***without*** having to do the gnop procedure (because
> the pool itself is already using ashift 12). But again, I need
> someone to confirm that.

Correct. The ashift property of a vdev is set at creation time and
cannot be changed (AFAIK) without destroying and recreating the pool.
Thus, you can use gnop to create the vdev with ashift=12, and then just
do a normal "zpool replace" or "zpool detach/attach" to replace drives
in the vdevs (whether 512B or 4K drives) without gnop.

I haven't read the code :) but I have done many, many drive
replacements on ashift=9 and ashift=12 vdevs and watched what happens
via zdb. :)
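In case it helps anyone searching the archives, the whole dance looks
roughly like this (a sketch from memory, not cut-and-paste from a live
system; "tank" and ada0/ada1/ada2 are placeholder names):

    # Create a .nop device that reports 4K sectors, and build the
    # pool on top of it so the vdev is created with ashift=12
    gnop create -S 4096 ada0
    zpool create tank mirror ada0.nop ada1

    # Drop the .nop shim; the pool imports fine on the raw device
    zpool export tank
    gnop destroy ada0.nop
    zpool import tank

    # Verify what the vdev actually got
    zdb -C tank | grep ashift    # should show: ashift: 12

    # Later, replace a dying drive -- no gnop needed, since ashift
    # is a property of the existing vdev
    zpool replace tank ada1 ada2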
> The WD10EARS are known for excessively parking their heads, which
> causes massive performance problems with both reads and writes.
> This is known by PC enthusiasts as the "LCC issue" (LCC = Load Cycle
> Count, referring to SMART attribute 193).
>
> On these drives there are ways to work around this issue -- it
> specifically involves disabling drive-level APM. To do so, you have
> to initiate a specific ATA CDB to the drive using "camcontrol cmd",
> and this has to be done every time the system reboots. There is one
> drawback to disabling APM as well: the drives run hotter.

On some WD Green drives, depending on the firmware and manufacturing
date, you can use the wdidle3.exe program (via a DOS boot) to set the
idle timeout to either "disabled" or "15 minutes", which is usually
enough to prevent most of the head-parking wear-out issues. However,
I believe this only worked for drives made up until Dec 2011 or Dec
2012?

We had the misfortune of using 12 of these in a ZFS storage box when
they were first released (2 TB for under $150? Hell yeah! Oops, you
get what you pay for ...). We quickly replaced them.

> You really need to be running stable/9 if you want to use SSDs with
> ZFS. I cannot stress this enough. I will not bend on this fact. I
> do not care if what people have is SLC rather than MLC or TLC -- it
> doesn't matter. TRIM on ZFS is a downright necessity for long-term
> reliability of an SSD. Anyway...

One can mitigate this a little by leaving 25% of the SSD
unpartitioned/unformatted, thus allowing the drive's background GC
process to work without impacting performance, and providing long-term
performance that's close to (but not quite 100% of) after-TRIM
performance. It takes a lot of will-power to leave 8-16-odd GB free on
an SSD that cost close to $200, though. :) It's not perfect, and it's
not as good as using TRIM, but at least it's doable on FreeBSD before
9.1-STABLE.

> You should probably be made aware of the fact that SSDs need to be
> kept roughly 30-40% unused to get the most benefits out of wear
> levelling. Once you hit the 20% remaining mark, performance takes a
> hit, and the drive begins hurting more and more. Low-capacity SSDs
> are therefore generally worthless given the capacity limitation.

Ah, I see you mention what I did above. :) Guess that's what I get
for not reading all the way through before starting a reply. :)

--
Freddie Cash
fjwcash@gmail.com
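P.S. For the archives: I believe the "camcontrol cmd" incantation
Jeremy is referring to is something along these lines (from memory, so
treat it as a sketch -- 0xEF is the ATA SET FEATURES command and 0x85
should be the "disable APM" feature; double-check against
camcontrol(8) and your drive's documentation before firing it at real
hardware, and substitute your own device for ada0):

    # Send ATA SET FEATURES (0xEF) with feature 0x85 = disable APM
    camcontrol cmd ada0 -a "EF 85 00 00 00 00 00 00 00 00 00 00"

    # Then watch SMART attribute 193 to confirm the parking stopped
    # (needs sysutils/smartmontools)
    smartctl -A /dev/ada0 | grep -i load_cycle

Since the setting is lost on a power cycle, you would re-run the
camcontrol line from /etc/rc.local (or similar) on every boot, as
Jeremy notes above.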
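P.P.S. The under-provisioning trick is just a matter of never
allocating the whole SSD when you partition it. A hypothetical 128 GB
SSD on ada2, with illustrative sizes:

    # GPT-label the SSD, then allocate only ~75% of it, 4K-aligned.
    # The untouched ~25% is never written, so the controller's GC
    # always has clean flash to work with.
    gpart create -s gpt ada2
    gpart add -t freebsd-zfs -a 4k -s 96G -l slog0 ada2

    # Use the labelled partition (not the raw disk) for the log vdev;
    # with two SSDs prepared this way you'd mirror them instead
    zpool add tank log gpt/slog0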