Date:      Wed, 09 Jul 2014 10:11:52 +0200
From:      Johan Hendriks <joh.hendriks@gmail.com>
To:        krad <kraduk@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Using 2 SSD's to create a SLOG
Message-ID:  <53BCF948.6010303@gmail.com>
In-Reply-To: <CALfReyeDpW-ELW9zmVFue+w6HrO-iepwUcf2chwLHofFfshrcg@mail.gmail.com>
References:  <CAP1HOmRe5hEts6qA=0ZiNbMt++DoyJO1f0fs=Wnx4ceC+ygWeg@mail.gmail.com> <20140708025106.GA85067@neutralgood.org> <CALfReyeDpW-ELW9zmVFue+w6HrO-iepwUcf2chwLHofFfshrcg@mail.gmail.com>


On 09-07-14 09:21, krad wrote:
> An NFS server is a common task that generates lots of synchronous writes
>
>
> On 8 July 2014 03:51, <kpneal@pobox.com> wrote:
>
>> On Mon, Jul 07, 2014 at 06:06:43PM -0700, javocado wrote:
>>> I am interested in adding an SSD "SLOG" to my ZFS system so as to
>>> (dramatically) speed up writes on this system.
>>>
>>> My question is if ZFS will, itself, internally, mirror two SSDs that are
>>> used as a SLOG ?
>>>
>>> What I mean is, if ZFS is already smart enough to create a zpool mirror
>>> (or, on my case, a zpool raidz3) then perhaps ZFS is also smart enough to
>>> mirror the SLOG to two individual SSDs ?
>>>
>>> I am hoping to dumbly plug two SSDs onto motherboard SATA ports and just
>>> hand them over, raw, to ZFS.
>>  From the zpool man page:
>>
>>         Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
>>
>>         The following command creates a ZFS storage pool consisting of
>>         two, two-way mirrors and mirrored log devices:
>>
>>           # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
>>              c4d0 c5d0
>>
>> You should be able to adapt that example into the 'zpool add' command that
>> adds a mirrored log to an existing pool.
>>
>> But know that the SLOG only helps writes that are synchronous. In many
>> workloads this is a small fraction of the total writes; for other
>> workloads it is a large portion.
>>
>> Do you know for certain that you need a SLOG?
>> --
>> Kevin P. Neal                                http://www.pobox.com/~kpn/
>>             On the community of supercomputer fans:
>> "But what we lack in size we make up for in eccentricity."
>>    from Steve Gombosi, comp.sys.super, 31 Jul 2000 11:22:43 -0600

I would not use raw disks...
The way I add a SLOG to the system is as follows.


In my case I can hot-swap disks, so I insert the first SSD. On the console it
will show up as, for example, da10.
I also physically mark the disk with the label I will use on the system, in
this example slog01.
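
If you are not sure which device name the new SSD got, something like the
following should show it (camcontrol and dmesg are in the base system, the
output of course depends on your hardware):

# camcontrol devlist
# dmesg | tail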

Then I use gpart to partition and label the disk:

# gpart create -s gpt /dev/da10
# gpart add -t freebsd-zfs -a 4k -l slog01 /dev/da10
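
To double check that the partition and label came out as intended, gpart can
list them; the -l flag prints the labels instead of the partition types:

# gpart show -l da10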

Then I insert the second disk.

If this disk shows up as da11, I use the following commands. I also physically
label this disk with a sticker or pen as slog02.

# gpart create -s gpt /dev/da11
# gpart add -t freebsd-zfs -a 4k -l slog02 /dev/da11
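
Both labels should now be visible under /dev/gpt, which is how I refer to the
disks in the zpool command below:

# ls -l /dev/gpt/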

This way I know for certain which physical disk carries the slog01 label.
If you do not label the disk itself and the /dev/daXX numbers change, you can
end up removing the wrong disk...
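
If you are ever in doubt which daXX device currently backs a label, glabel can
show the mapping, for example:

# glabel status | grep slog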

Then I add the SLOG device to the pool. In this example my pool is named
storage:

# zpool add storage log mirror gpt/slog01 gpt/slog02
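
If you want to see what the command would do before committing, zpool add also
accepts -n, which only prints the resulting configuration without changing the
pool:

# zpool add -n storage log mirror gpt/slog01 gpt/slog02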

A zpool status will show you the whole pool, and at the end you will see the
mirrored log device:

san01 ~ # zpool status
   pool: storage
  state: ONLINE
  scan: scrub repaired 0 in 1h21m with 0 errors on Tue Jul  1 06:51:21 2014
config:

     NAME                STATE     READ WRITE CKSUM
     storage             ONLINE       0     0     0
       mirror-0          ONLINE       0     0     0
         gpt/disk0       ONLINE       0     0     0
         gpt/disk1       ONLINE       0     0     0
       mirror-1          ONLINE       0     0     0
         gpt/disk2       ONLINE       0     0     0
         gpt/disk3       ONLINE       0     0     0
       mirror-2          ONLINE       0     0     0
         gpt/disk4       ONLINE       0     0     0
         gpt/disk5       ONLINE       0     0     0
       mirror-3          ONLINE       0     0     0
         gpt/disk12      ONLINE       0     0     0
         gpt/disk13      ONLINE       0     0     0
       mirror-4          ONLINE       0     0     0
         gpt/disk6       ONLINE       0     0     0
         gpt/disk7       ONLINE       0     0     0
       mirror-5          ONLINE       0     0     0
         gpt/disk10      ONLINE       0     0     0
         gpt/disk11      ONLINE       0     0     0
       mirror-6          ONLINE       0     0     0
         gpt/disk8       ONLINE       0     0     0
         gpt/disk9       ONLINE       0     0     0
     logs
       mirror-7          ONLINE       0     0     0
         gpt/slog01      ONLINE       0     0     0
         gpt/slog02      ONLINE       0     0     0

errors: No known data errors


The main advantage of the gpart label is that you can move the disk to any 
SATA/SAS port in the system.
If I use the disks in the front bays of the system they show up as daXX, but I 
can also put them on the SATA controller on the motherboard if I want, where 
they become adaXX. Because ZFS uses the GPT labels it will always find them.

Please make sure you have a backup...
Also first try it on a virtual machine and get comfortable with the 
commands...
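
If you do not have a spare machine or VM handy, you can also practice the same
commands on a throwaway pool backed by sparse files; the file names and sizes
below are only placeholders:

# truncate -s 1G /tmp/d0 /tmp/d1 /tmp/l0 /tmp/l1
# zpool create testpool mirror /tmp/d0 /tmp/d1
# zpool add testpool log mirror /tmp/l0 /tmp/l1
# zpool status testpool
# zpool destroy testpool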

regards
