Date: Fri, 1 Dec 2017 22:47:24 -0800
From: "K. Macy" <kmacy@freebsd.org>
To: Dustin Wenz <dustinwenz@ebureau.com>
Cc: "freebsd-virtualization@freebsd.org" <freebsd-virtualization@freebsd.org>
Subject: Re: bhyve uses all available memory during IO-intensive operations
Message-ID: <CAHM0Q_PRJMeW0jDWBMQG7yoXm16tacjyUVrO8EQgL_G7WXR1vA@mail.gmail.com>
In-Reply-To: <59DFCE5F-029F-4585-B0BA-8FABC43357F2@ebureau.com>
References: <F4E35CB9-30F9-4C63-B4CC-F8ADC9947E3C@ebureau.com>
            <CAHM0Q_MPNEBq=J9yJADhzA96nKvdgEiFESV-0Y9JB5mewfGspQ@mail.gmail.com>
            <59DFCE5F-029F-4585-B0BA-8FABC43357F2@ebureau.com>
On Fri, Dec 1, 2017 at 9:23 PM, Dustin Wenz <dustinwenz@ebureau.com> wrote:
> I have noticed significant storage amplification for my zvols; that could
> very well be the reason. I would like to know more about why it happens.
>
> Since the volblocksize is 512 bytes, I certainly expect extra cpu overhead
> (and maybe an extra 1k or so worth of checksums for each 128k block in the
> vm), but how do you get a 10X expansion in stored data?
>
> What is the recommended zvol block size for a FreeBSD/ZFS guest? Perhaps 4k,
> to match the most common mass storage sector size?

I would err somewhat larger; the benefits of shallower indirect block
chains will outweigh the cost of read-modify-write, I would guess. And I
think it should match your guest file system block size. I don't know what
ext4's default is, but ext2/3 was 16k by default IIRC.

-M

>
> - .Dustin
>
> On Dec 1, 2017, at 9:18 PM, K. Macy <kmacy@freebsd.org> wrote:
>
> One thing to watch out for with chyves, if your virtual disk is more
> than 20G, is the fact that it uses 512 byte blocks for the zvols it
> creates. I ended up using 1.4TB while only half filling a 250G zvol.
> Chyves is quick and easy, but it's not exactly production ready.
>
> -M
>
> On Thu, Nov 30, 2017 at 3:15 PM, Dustin Wenz <dustinwenz@ebureau.com> wrote:
>
> I'm using chyves on FreeBSD 11.1-RELEASE to manage a few VMs (the guest OS
> is also FreeBSD 11.1). Their sole purpose is to house some medium-sized
> Postgres databases (100-200GB). The host system has 64GB of real memory and
> 112GB of swap. I have configured each guest to use only 16GB of memory, yet
> while doing my initial database imports in the VMs, bhyve will quickly grow
> to use all available system memory and then be killed by the kernel:
>
> kernel: swap_pager: I/O error - pageout failed; blkno 1735, size 4096, error 12
> kernel: swap_pager: I/O error - pageout failed; blkno 1610, size 4096, error 12
> kernel: swap_pager: I/O error - pageout failed; blkno 1763, size 4096, error 12
> kernel: pid 41123 (bhyve), uid 0, was killed: out of swap space
>
> The OOM condition seems related to doing moderate IO within the VM, though
> nothing within the VM itself shows high memory usage. This is the chyves
> config for one of them:
>
> bargs                        -A -H -P -S
> bhyve_disk_type              virtio-blk
> bhyve_net_type               virtio-net
> bhyveload_flags
> chyves_guest_version         0300
> cpu                          4
> creation                     Created on Mon Oct 23 16:17:04 CDT 2017 by
>                              chyves v0.2.0 2016/09/11 using __create()
> loader                       bhyveload
> net_ifaces                   tap51
> os                           default
> ram                          16G
> rcboot                       0
> revert_to_snapshot
> revert_to_snapshot_method    off
> serial                       nmdm51
> template                     no
> uuid                         8495a130-b837-11e7-b092-0025909a8b56
>
> I've also tried using different bhyve_disk_types, with no improvement. How
> is it that bhyve can use far more memory than I'm specifying?
>
> - .Dustin
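
To make the block-size discussion above concrete, the commands below are a
minimal sketch of how to inspect a zvol's volblocksize and space usage, and
how to create a zvol with a larger block size up front. The pool and dataset
names are placeholders, and chyves may lay out its zvols differently; this is
plain ZFS, not a chyves feature.

    # Inspect block size and space usage of an existing zvol; a large gap
    # between 'used' and 'logicalused' points at the kind of amplification
    # described above.
    zfs get volblocksize,volsize,used,logicalused tank/chyves/guests/vm01/disk0

    # Create a replacement zvol with a 16k block size (or 4k to match common
    # sector sizes). volblocksize is fixed at creation time and cannot be
    # changed later, so data has to be copied into the new volume.
    zfs create -V 250G -o volblocksize=16k tank/vm01-disk0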
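
On the memory question itself, one diagnostic sketch (not something discussed
in the quoted replies, just an assumption about where to look first) is to
compare the bhyve process's resident/wired footprint against the host's ZFS
ARC, since the -S in bargs asks bhyve to wire the guest's 16G:

    # Largest processes by resident size; a wired guest should show up here.
    top -b -o res | head -20

    # Current ARC size and its configured ceiling on the host.
    sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max

    # If the ARC turns out to be competing with the guests, it can be capped
    # at boot in /boot/loader.conf; the value below is only an example.
    # vfs.zfs.arc_max="32G"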