From: Tommi Lätti <sty@blosphere.net>
To: Dan Naumov
Cc: FreeBSD-STABLE Mailing List
Date: Wed, 27 Jan 2010 03:53:20 +0900
Subject: Re: immense delayed write to file system (ZFS and UFS2), performance issues

>   9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       136
> 193 Load_Cycle_Count        0x0032   199   199   000    Old_age   Always       -       5908
>
> The disks are the exact same model and look to be the same firmware.
> Should I be worried that the newer disk has, in 136 hours, reached a
> Load Cycle count twice as big as that of the disk that's 5253 hours old?
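For anyone wanting to pull just these two attributes out of the SMART table, a quick sketch (assumes sysutils/smartmontools is installed; shown against sample text so it runs without a disk, swap the here-string for `smartctl -A /dev/ad4` on a real system):

```shell
# Sample lines in the same layout smartctl -A prints; replace with
# real smartctl output on an actual drive.
smart_sample='  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       136
193 Load_Cycle_Count        0x0032   199   199   000    Old_age   Always       -       5908'

# Print attribute name ($2) and raw value (last field) for the two
# attributes of interest.
result=$(echo "$smart_sample" | awk '/Power_On_Hours|Load_Cycle_Count/ { print $2, $NF }')
echo "$result"
```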
Well, AFAIK WD certifies that there's no extra risk involved unless you go over 300,000 park cycles. On the other hand, my 9-month-old 1.5 TB Green drive has over 200,000 cycles. Maybe check if you can disable the idle timer using WDIDLE3; that works for my drives (although it did something strange to one out of the 6 drives: it decreased the reported sector count and ZFS invalidated the pool :/ ).

-- 
br, Tommi
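A back-of-envelope check of why this matters, using the raw values quoted above and WD's 300,000-cycle figure (integer arithmetic, so the rate is rounded down):

```shell
# Numbers from the quoted SMART output for the newer disk.
hours=136
cycles=5908

# Current parking rate, and hours of power-on time until the
# 300,000-cycle figure would be reached at that rate.
rate=$((cycles / hours))            # load cycles per power-on hour
hours_to_rating=$((300000 / rate))
echo "~${rate} cycles/hour, ~${hours_to_rating} hours to 300,000"
```

That works out to roughly 43 cycles/hour and under 7,000 power-on hours, i.e. well under a year of continuous operation, which is exactly why people reach for WDIDLE3 on these drives.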