Date: Fri, 12 Nov 2021 13:47:49 -0800
From: Pete Wright via freebsd-fs <freebsd-fs@freebsd.org>
To: Chris Ross <cross+freebsd@distal.com>, Warner Losh <imp@bsdimp.com>
Cc: Ronald Klop <ronald-lists@klop.ws>, freebsd-fs <freebsd-fs@freebsd.org>
Subject: Re: swap_pager: cannot allocate bio
Message-ID: <3b2b6c10-4a76-e7d4-c816-82fd8965316a@nomadlogic.org>
In-Reply-To: <953DD67A-1A37-4D03-B878-E65396641B7D@distal.com>
References: <9FE99EEF-37C5-43D1-AC9D-17F3EDA19606@distal.com> <09989390-FED9-45A6-A866-4605D3766DFE@distal.com> <op.1cpimpsmkndu52@joepie> <4E5511DF-B163-4928-9CC3-22755683999E@distal.com> <42006135.15.1636709757975@mailrelay> <7B41B7D7-0C74-4F87-A49C-A666DB970CC3@distal.com> <CANCZdfpW3YJ7c_EO82BYwLCFhDXdCp2W_fxmxAXzYvr7HNmnZQ@mail.gmail.com> <4008C512-31F1-4BE3-B674-A270CF674757@distal.com> <CANCZdfrY9YZ%2BrLpnhJgjxtkuYi5GnNcGU6SkZtJqhR9%2B_U44RA@mail.gmail.com> <953DD67A-1A37-4D03-B878-E65396641B7D@distal.com>
On 11/12/21 11:59, Chris Ross wrote:
>
>> On Nov 12, 2021, at 14:52, Warner Losh <imp@bsdimp.com> wrote:
>>
>> My swap is on a partition on the non-ZFS disk. A physical disk as far as the kernel knows, hardware RAID1.
>>
>> # pstat -s
>> Device          1K-blocks      Used      Avail Capacity
>> /dev/da0p3      445682648   1018524  444664124     0%
>>
>> OK. That's well supported and should work w/o some of the issues that I raised. I'd misunderstood and thought you were swapping to zvols...
>>
>> Let me know if what you're saying above is true of my case, and any advice as to how I can avoid it. I had a "not enough swap space" a while back, and accordingly increased the size of my swap partition. I have 128GB of memory, though between the ARC and the big process I was running, that fills it easily.
>>
>> Yea, this is a 'memory is exhausted' problem, and more swap won't help that. It's unclear why we run out so fast, and why the separate zones for the bio aren't providing a good level of insulation from out of memory scenarios.
>
> Okay. Well, I can't easily add more memory to this machine, though I am investigating it. I certainly can't do it in short order. I presume the problem is that I recently increased the size of this pool by adding a large raidz vdev to it. I've only been seeing this since. Is there any way I can "limit" the perceived size of the ZFS filesystem to ease the problem? Is there anything I can tune to help? Can I turn off or drastically reduce the ARC? A decrease in performance would be better than getting stuck after a day or so. :-)

I don't think this is "the right way to do things" *but* I have begun using this sysctl to limit the size of my ARC*. The reason I say it's not the right way is that it may just paper over a real bug and prevent us from getting it fixed. Might be worth testing though to see if it helps:

# 25GB arc
vfs.zfs.arc.max=25000000000

cheers,
-pete

*my use-case is for a system running a bunch of VMs and this has allowed me to avoid swapping. Perf has been acceptable.

--
Pete Wright
pete@nomadlogic.org
@nomadlogicLA
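[Editor's note: a minimal sketch of how the ARC cap Pete mentions is typically applied, assuming FreeBSD 13+ with the OpenZFS vfs.zfs.arc.max name; the 25 GB figure is just his example value:

    # Set at runtime; takes effect immediately, no reboot needed.
    sysctl vfs.zfs.arc.max=25000000000

    # Persist the same setting across reboots via /etc/sysctl.conf.
    echo 'vfs.zfs.arc.max=25000000000' >> /etc/sysctl.conf

On releases before 13.0 the equivalent tunable is spelled vfs.zfs.arc_max.]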