From: andy thomas <andy@time-domain.co.uk>
To: Ireneusz Pluta
Cc: freebsd-fs
Date: Tue, 22 Jan 2019 11:15:59 +0000 (GMT)
Subject: Re: ZFS on Hardware RAID
On Sun, 20 Jan 2019, Ireneusz Pluta wrote:

> On 2019-01-20 at 09:45, andy thomas wrote:
>> I run a number of very busy webservers (Dell PowerEdge 2950 with LSI
>> MegaRAID SAS 1078 controllers) with the first two disks in RAID 1 as
>> the FreeBSD system disk and the remaining four disks configured as
>> RAID 0 virtual disks making up a ZFS RAIDz1 pool with three disks
>> plus one hot spare.
>
> In this configuration, have you ever tested causing a drive failure,
> to see the hot spare activated?

Yesterday I set up a spare Dell 2950 with a PERC 5/i integrated HBA and
six 73 GB SAS disks, with the first two disks configured as a RAID 1
system disk (/dev/mfid0) and the remaining four disks as RAID 0 virtual
disks (mfid1-mfid4). After adding a freebsd-zfs GPT partition to each of
these four disks, I created a RAIDz1 pool from mfid1p1, mfid2p1 and
mfid3p1 with mfid4p1 as a spare, then created a simple ZFS filesystem
on it.

After copying a few hundred MB of files to the ZFS filesystem, I yanked
/dev/mfid3 out to simulate a disk failure. I was then able to manually
detach the failed disk and replace it with the spare. Later, after
pushing /dev/mfid3 back in, rebooting and scrubbing the pool, mfid4
had taken the place of the pulled mfid3, and mfid3 became the new
spare.

So a spare disk replacing a failed disk seems to be semi-automatic in
FreeBSD (this was version 10.3), although I have seen fully automatic
replacement on a Solaris SPARC platform.

Andy
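For anyone wanting to reproduce the setup described above, a minimal sketch of the partitioning and pool creation might look like the following. The device names (mfid1-mfid4) come from the post; the pool name "tank" and the partition alignment are my own assumptions. These commands are destructive, so run them only against disks you intend to wipe.

```shell
# Assumed sketch: GPT-partition the four RAID 0 virtual disks
# (device names per the post; 1 MB alignment is an assumption)
for d in mfid1 mfid2 mfid3 mfid4; do
    gpart create -s gpt ${d}
    gpart add -t freebsd-zfs -a 1m ${d}
done

# Create the RAIDz1 pool from three partitions, with the fourth
# attached as a hot spare ("tank" is a hypothetical pool name)
zpool create tank raidz mfid1p1 mfid2p1 mfid3p1 spare mfid4p1

# Optionally ask ZFS to activate replacements automatically; on
# FreeBSD this also depends on a fault-monitoring daemon being present
zpool set autoreplace=on tank
```

Note that the semi-automatic behaviour observed on 10.3 matches the default: the `autoreplace` property only requests automatic behaviour, and fully hands-off spare activation needs a daemon watching for fault events (as Solaris FMA does).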
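The manual detach-and-replace step described above can be sketched as follows. Again, the pool name "tank" is an assumption, and the commands are only a sketch of one plausible sequence, not a transcript of what was actually typed.

```shell
# After pulling /dev/mfid3, the pool shows the vdev as UNAVAIL/REMOVED
zpool status tank

# Rebuild the data onto the hot spare...
zpool replace tank mfid3p1 mfid4p1

# ...then detach the failed device so the spare becomes a permanent
# member of the raidz vdev
zpool detach tank mfid3p1

# After reinserting the old disk and rebooting, verify with a scrub;
# the reinserted disk can then be added back as the new spare
zpool scrub tank
zpool status tank
```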