Date:      Fri, 27 Mar 2015 19:43:22 +1100
From:      Peter Jeremy <peter@rulingia.com>
To:        Brett Wynkoop <freebsd-arm@wynn.com>
Cc:        freebsd-arm@freebsd.org
Subject:   Re: ZFS on RPi (was: zfs on BeagleBone)
Message-ID:  <20150327084322.GD41630@server.rulingia.com>
In-Reply-To: <20150323034147.476a6f3f@ivory.wynn.com>
References:  <20150311130932.3493a938@ivory.wynn.com> <20150321065346.GA64358@server.rulingia.com> <20150323034147.476a6f3f@ivory.wynn.com>

On 2015-Mar-23 03:41:47 -0400, Brett Wynkoop <freebsd-arm@wynn.com> wrote:
>On Sat, 21 Mar 2015 17:53:46 +1100
>Peter Jeremy <peter@rulingia.com> wrote:
>
>> panic: vm_fault: fault on nofault entry, addr: dd2f1000
>> KDB: stack backtrace:
>> Uptime: 11m46s
>> Physical memory: 473 MB
>> Dumping 36 MB:sdhci_bcm0: DMA in use
>>
>> The tuning I did was:
>> vfs.zfs.arc_max="24M"
>> vfs.zfs.vdev.cache.size="5M"
>
>Sorry for the delay.  Other things have kept me from being as attentive
>to the arm list as I might like.

That's OK.  Thanks for the response.  I was in the middle of a buildworld,
so the following results are from a fresh world built at head r280279.

>I strongly suggest setting vm.kmem_size to your real memory and doing
>the same with vm.kmem_size_max.  I came up with this after doing loads
>of reading about zfs on memory-restricted systems.

I had deliberately not set vm.kmem_size or vm.kmem_size_max because
the defaults seemed reasonable:
hw.physmem: 495562752
hw.usermem: 469278720
hw.realmem: 536866816
vm.kmem_size: 161853440
vm.kmem_size_min: 12582912
vm.kmem_size_max: 422366413
vm.kmem_map_size: 13819904
vm.kmem_map_free: 148033536

I tried tuning vm.kmem_size{,_max} to hw.physmem and it still crashed.
vm.kmem_size: 495562752
vm.kmem_size_min: 12582912
vm.kmem_size_max: 495562752
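
In loader.conf(5) terms, the combined tuning for this run would look
something like the following sketch (the kmem values just mirror
hw.physmem, per your suggestion):

# /boot/loader.conf -- sketch of the tuning for this run
vfs.zfs.arc_max="24M"             # cap the ARC well below the ~512MB of RAM
vfs.zfs.vdev.cache.size="5M"      # shrink the per-vdev cache
vm.kmem_size="495562752"          # = hw.physmem
vm.kmem_size_max="495562752"      # ditto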

"vmstat 1" starting roughly the same time as I created the pool:
procs  memory      page                       disks     faults         cpu
r b w  avm   fre   flt  re  pi  po    fr   sr mm0 md0   in    sy    cs us sy id
1 0 0 273M  405M     0   0   0   0     0    5   0   0 2860   144   403  0  2 98
1 0 0 274M  405M    27   0  13   0     2    5  11   0 3341   280  1282  1  5 95
0 1 0 285M  401M   361   0  11   0   195   12  19   0 9592   501  7453  1 95  4
0 1 0 285M  401M     0   0   0   0     0    6  12   0 2625   134   527  0  3 97
0 1 0 285M  401M     0   0   0   0     0    6  21   0 2661   139   643  0  2 98
0 1 0 285M  401M     0   0   0   0     0    6  15   0 2600   130   558  0  2 98
0 1 0 285M  401M     0   0   0   0     0    6  13   0 2557   130   536  0  2 98
0 1 0 285M  401M     0   0   0   0     0    6  41   0 3309   129   855  0  3 97
0 1 0 285M  401M     0   0   0   0     0    6 193   0 3984   129  2568  1  9 91
0 1 0 285M  401M     0   0   0   0     0    6 324   0 4211   140  4018  0  8 92
0 1 0 285M  401M     1   0   0   0     0    6 324   0 4160   128  4014  0 13 87
0 1 0 285M  401M     0   0   0   0     0    6 323   0 4224   127  3999  0 12 88
0 1 0 285M  401M     0   0   0   0     0    6 324   0 4214   128  4029  0 13 87
0 1 0 285M  401M     0   0   0   0     0    6 325   0 4127   131  4026  0  5 95
0 1 0 285M  401M     0   0   0   0     0    6 324   0 4107   140  3977  0 13 87
0 1 0 285M  401M     0   0   0   0     0    6 324   0 4179   130  4027  1  9 90
0 1 0 285M  401M     0   0   0   0     0    6 324   0 4179   130  4035  0 12 88
[panic at this point]

FWIW, the command I used was:
zpool create -O atime=off -O compression=lz4 tank mmcsd0s2d

The slice is ~12GB:
   6821865  24256512         4  freebsd-zfs  (12G)
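
(That line is an excerpt of gpart(8) output; something like the
following shows the full layout of the slice:)

gpart show mmcsd0s2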

Looking at the disk, it appears the ZFS labels were written, though
"zpool import" can't see the pool.  All 4 labels look like:

--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'tank'
    state: 0
    txg: 0
    pool_guid: 15675417041144722368
    hostid: 3523104732
    hostname: 'rpi1.rulingia.com'
    top_guid: 17877609725061934307
    guid: 17877609725061934307
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 17877609725061934307
        path: '/dev/mmcsd0s2d'
        phys_path: '/dev/mmcsd0s2d'
        whole_disk: 1
        metaslab_array: 0
        metaslab_shift: 0
        ashift: 9
        asize: 12414615552
        is_log: 0
        create_txg: 4
    features_for_read:
    create_txg: 4
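
Note the txg: 0 and metaslab_array: 0 above -- my guess is the panic
hit before the first transaction group was synced, so there is no
valid uberblock for the import code to find.  For anyone repeating
the check, something like the following dumps the labels and retries
the import:

zdb -l /dev/mmcsd0s2d    # dump all four vdev labels
zpool import             # scan for importable pools
zpool import -d /dev     # search /dev explicitly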

-- 
Peter Jeremy
