Date:      Wed, 3 Jul 2019 16:34:08 +0200
From:      "Nagy, Attila" <bra@fsn.hu>
To:        "Sam Fourman Jr." <sfourman@gmail.com>
Cc:        FreeBSD FS <freebsd-fs@freebsd.org>
Subject:   Re: ZFS exhausts kernel memory just by importing zpools
Message-ID:  <21b04b21-8850-c3c3-36c9-a0d0ede4dc22@fsn.hu>
In-Reply-To: <CAOFF+Z1rja=ALCJG9Mk7dycRqwErk7uVvBoE+3TYxS8qgkLAUw@mail.gmail.com>
References:  <e542dfd4-9534-1ec7-a269-89c3c20cca1d@fsn.hu> <CAOFF+Z1rja=ALCJG9Mk7dycRqwErk7uVvBoE+3TYxS8qgkLAUw@mail.gmail.com>

Hi,

Oh, I should have mentioned that: no, I don't use (and have never used) dedup.
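
For the record, dedup status can be double-checked with something like this
(just a sketch; "examplepool" is a placeholder, not one of the real pool
names):

    # dedup property per dataset (should read "off" everywhere here)
    zfs get -r -o name,property,value dedup examplepool

    # dedup ratio per pool; 1.00x means no deduplicated blocks
    zpool list -o name,size,dedupratio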

On 2019. 07. 02. 18:13, Sam Fourman Jr. wrote:
> Hello,
>
> My initial guess is that you may have de-duplication enabled on one 
> (or more) of the underlying datasets.
> **If** this is the case, a simple solution is to add more memory to 
> the machine. (64 GB of memory is not sufficient for dedup to be enabled.)
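
Just to put a rough number on that claim, with the commonly cited ~320 bytes
of core per unique block and an assumed 64 KiB average block size (both
assumptions, not measurements from these pools):

    2.2 TiB / 64 KiB      ~= 36.9 million unique blocks per pool
    36.9e6 * 320 bytes    ~= 11 GiB of dedup table per pool
    44 pools * ~11 GiB    ~= several hundred GiB of RAM

So dedup would indeed be hopeless on this box, but as said above, it is not
enabled here.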
>
> -- Sam Fourman Jr.
>
> On Tue, Jul 2, 2019 at 10:59 AM Nagy, Attila <bra@fsn.hu 
> <mailto:bra@fsn.hu>> wrote:
>
>     Hi,
>
>     Running latest stable/12 on amd64 with 64 GiB memory on a machine with
>     44 4 TB disks. Each disk has its own zpool on it (because I handle
>     redundancy between machines, not locally with ZFS).
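
To make the layout concrete: every pool is a plain, single-disk pool, created
roughly like this (device and pool names below are made up):

    zpool create data00 /dev/da0
    zpool create data01 /dev/da1

and so on for all 44 disks; redundancy is handled between machines, not inside
ZFS.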
>
>     One example zpool holds 2.2 TiB of data (according to df) and has
>     around 75 million files in hashed directories; this is the typical
>     usage on these pools.
>
>     When I import these zpools, top shows around 50 GiB of wired memory
>     (the ARC is minimal, as no files have been touched yet). After I start
>     using the pools (heavy reads/writes), the free memory quickly
>     disappears (the ARC grows) until all memory is gone; the machine then
>     starts killing processes and ends up in a deadlock where nothing helps.
>
>     If I import the pools one by one, each of them adds around 1-1.5 GiB
>     of wired memory.
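
A sketch of how the per-import growth can be measured, in case anyone wants to
reproduce it ("examplepool" is a placeholder name):

    # wired pages and ARC size before the import...
    sysctl vm.stats.vm.v_wire_count kstat.zfs.misc.arcstats.size
    zpool import examplepool
    # ...and again after it; the difference is the per-pool cost
    sysctl vm.stats.vm.v_wire_count kstat.zfs.misc.arcstats.size

If I remember correctly, ZFS kernel allocations also show up under the
"solaris" malloc type in vmstat -m on stable/12, which makes the growth easy
to watch.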
>
>     Top shows this right after the machine came to a halt and nothing
>     else works (I can't even log in on the console):
>
>     last pid: 61878;  load averages:  5.05,  4.42,  2.50  up 0+01:07:23  15:45:17
>     171 processes: 1 running, 162 sleeping, 1 stopped, 1 zombie, 6 waiting
>     CPU:  0.0% user,  0.0% nice,  0.2% system,  0.0% interrupt, 99.8% idle
>     Mem: 7716K Active, 8192 Inact, 84K Laundry, 57G Wired, 180M Buf, 14M Free
>     ARC: 21G Total, 10G MFU, 4812M MRU, 4922M Anon, 301M Header, 828M Other
>          5739M Compressed, 13G Uncompressed, 2.35:1 Ratio
>     Swap:
>
>        PID USERNAME    THR PRI NICE   SIZE    RES STATE    C TIME    WCPU COMMAND
>     61412 root          1  20    0    14M  3904K CPU14   14 0:06   1.55% top
>     57569 redis        57  20    0  1272M    64M uwait   22 4:28   0.24% consul
>      5574 root          1  20    0    13M  3440K nanslp  10 0:02   0.05% gstat
>      5557 root          1  20    0    20M  7808K select  20 0:00   0.01% sshd
>      5511 root          1  20    0    20M  7808K select   4 0:01   0.01% sshd
>      4955 root          1  20    0    10M  1832K select   9 0:00   0.01% supervis
>      5082 root          1  20    0    25M    14M select   0 0:00   0.00% perl
>      4657 _pflogd       1  20    0    12M  2424K bpf      1 0:00   0.00% pflogd
>      5059 elasticsea    2  20  -20  6983M   385M STOP     5 1:29   0.00% java
>     61669 root          1  26    0    23M      0 pfault   4 0:14   0.00% <python3
>     61624 root          1  20  -20    24M    14M buf_ha   9 0:09   0.00% python3.
>     61626 root          1  20  -20    23M    16K pfault   0 0:08   0.00% python3.
>     61651 root          1  20  -20    23M    14M buf_ha  10 0:08   0.00% python3.
>     61668 root          1  20  -20    23M    13M buf_ha  20 0:08   0.00% python3.
>     I've already tried to shrink the ARC and vm.kmem_size, without much
>     success.
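
Concretely, "shrinking" means the usual knobs, along these lines (the values
below are placeholders, not necessarily the exact ones I tried):

    # /boot/loader.conf
    vfs.zfs.arc_max="16G"
    vm.kmem_size="48G"

    # vfs.zfs.arc_max can also be lowered at runtime on stable/12
    sysctl vfs.zfs.arc_max=17179869184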
>
>     Any ideas what causes this?
>
>
>
>
> -- 
>
> Sam Fourman Jr.


