Date: Sat, 5 Feb 2011 17:56:09 +0000
From: krad <kraduk@gmail.com>
To: Ivan Voras <ivoras@freebsd.org>
Cc: freebsd-questions@freebsd.org
Subject: Re: 4k drives and zfs
Message-ID: <AANLkTik_6k7M6MhSZvLQNNA88AB9=3SM2MZrRBY93p8b@mail.gmail.com>
In-Reply-To: <AANLkTikM4RhrgqeHhH6Wp1KMgu+3S9rtOe5r5gzvFFTp@mail.gmail.com>
References: <AANLkTim6zvAtYnO=FhE_1+R44LDh9e7YUYTBW0VB7Zfb@mail.gmail.com> <iibi22$mom$1@dough.gmane.org> <AANLkTikM4RhrgqeHhH6Wp1KMgu+3S9rtOe5r5gzvFFTp@mail.gmail.com>
On 2 February 2011 15:20, krad <kraduk@gmail.com> wrote:
> On 2 February 2011 12:18, Ivan Voras <ivoras@freebsd.org> wrote:
>> On 02/02/2011 05:52, krad wrote:
>>>
>>> Hi All,
>>>
>>> A quick question. I'm upgrading my filer at home to have 2x 2 TB Samsung
>>> F4EG drives. I believe these are 4k drives. I'm intending to use the
>>> gnop trick to get the ZFS ashift to 12. Will this make my pool
>>> unbootable? I have read a few threads alluding to this.
>>
>> There have been bugs which make such drives unbootable, but they have
>> been fixed at least in CURRENT (I haven't tried it).
>
> Were they related to any type of pool in particular? I'm just mirroring.

Well, they are in. I tested with the gnop trick and without, and it didn't
seem to make much difference to the performance of the drives; certainly not
enough for me to worry about. Thinking about it, though, as I was adding the
new drives to the existing pool as a mirror and then dropping the old drives
out one by one, I was probably forced into 512-byte sectors anyway. Just
finishing up now by filling the pool with urandom and reading it back. It's
taking a while, though. Do these values seem similar to what others get?
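For reference, the gnop trick mentioned above usually goes something like
this (a sketch only: the device names ada0/ada1 and the pool name "system"
are assumptions, and the .nop node is only needed on one device at creation
time):

```shell
# Put a gnop layer with 4096-byte sectors on top of one disk; ZFS will
# then choose ashift=12 for the whole vdev at pool creation.
gnop create -S 4096 /dev/ada0

# Create the mirror using the .nop device alongside the raw second disk.
zpool create system mirror /dev/ada0.nop /dev/ada1

# The gnop layer matters only at creation: export, destroy it, re-import.
zpool export system
gnop destroy /dev/ada0.nop
zpool import system

# Verify the result; zdb should report "ashift: 12" for the vdev.
zdb -C system | grep ashift
```

Since ashift is fixed per vdev at creation, attaching 4k drives into an
existing 512-byte-sector mirror (as described above) keeps the old ashift,
which is why the gnop step would have made no difference in that scenario.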
Bear in mind I have a dd of /dev/zero and a dd of /dev/urandom running in
parallel, both with bs=128k.

# zpool iostat system 5
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
system      1.12T   709G    149    212  8.48M  15.8M
system      1.12T   708G      2    336  2.44K  30.2M
system      1.12T   708G      2    541  3.12K  57.0M
system      1.12T   708G      1    349  6.05K  32.8M
system      1.12T   708G      1    581    599  62.9M
system      1.12T   707G      3    320  5.46K  30.7M

# iostat -d 5
             ad4              ad5              ad6              ad7
  KB/t  tps  MB/s   KB/t  tps  MB/s   KB/t  tps  MB/s   KB/t  tps  MB/s
 92.50   10  0.93 102.35  245 24.50  91.86   10  0.93 102.02  246 24.49
  0.00    0  0.00 106.64  268 27.95   0.00    0  0.00 115.60  413 46.64
  0.00    0  0.00 109.72  590 63.19   0.00    0  0.00 103.79  437 44.33
  0.00    0  0.00 113.48  349 38.72   0.00    0  0.00 115.70  432 48.84
  0.00    0  0.00 106.66  547 57.02   0.00    0  0.00 103.98  461 46.84
  0.00    0  0.00 117.52  406 46.62   0.00    0  0.00 117.12  407 46.59
  0.00    0  0.00 110.43  565 60.92   0.00    0  0.00 109.64  601 64.37
  0.00    0  0.00 119.11  282 32.81   0.00    0  0.00 117.87  254 29.27

# zpool status
  pool: system
 state: ONLINE
 scrub: scrub completed after 2h2m with 0 errors on Sat Feb  5 11:47:21 2011
config:

        NAME            STATE     READ WRITE CKSUM
        system          ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            label/red   ONLINE       0     0     0
            label/blue  ONLINE       0     0     0
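The parallel fill test described above can be sketched like this (TARGET is
a stand-in for the pool's mountpoint; /tmp is the default here purely so the
sketch runs anywhere, and count is kept small for illustration):

```shell
# Write zeros and random data to the pool in parallel, 128 KiB at a time.
# TARGET would be the ZFS mountpoint in practice (e.g. /system).
TARGET=${TARGET:-/tmp}
dd if=/dev/zero    of="$TARGET/zero.fill" bs=128k count=16 2>/dev/null &
dd if=/dev/urandom of="$TARGET/rand.fill" bs=128k count=16 2>/dev/null &
wait
# Both files should now be count * 128 KiB = 2 MiB each.
ls -l "$TARGET/zero.fill" "$TARGET/rand.fill"
```

Reading the files back afterwards (e.g. with dd to /dev/null) exercises the
read path the same way; the zero stream mainly tests throughput, while
urandom defeats any compression on the dataset.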