Date: Fri, 30 Oct 2020 17:41:41 -0700
From: Cy Schubert <Cy.Schubert@cschubert.com>
To: Matthew Macy <mat.macy@gmail.com>
Cc: Cy Schubert <Cy.Schubert@cschubert.com>, Slawa Olhovchenkov <slw@zxy.spb.ru>, qroxana <qroxana@mail.ru>, freebsd-current <freebsd-current@freebsd.org>
Subject: Re: OpenZFS: kldload zfs.ko freezes on i386 4GB memory
Message-ID: <202010310041.09V0ffBL035185@slippy.cwsent.com>
In-Reply-To: <CAPrugNoYZS4wcyrpQ0584jZM1zTnwds7rCQPtm5ahJ8Gm91H1A@mail.gmail.com>
References: <E1kWvLj-0000GY-Ic.qroxana-mail-ru@smtp29.i.mail.ru>
 <202010300313.09U3D0KZ006216@slippy.cwsent.com>
 <20201030204622.GF2033@zxy.spb.ru>
 <202010302053.09UKrAXc031272@slippy.cwsent.com>
 <20201030220809.GG2033@zxy.spb.ru>
 <202010302234.09UMYA5d032018@slippy.cwsent.com>
 <20201030224734.GH2033@zxy.spb.ru>
 <202010302300.09UN0t4A032372@slippy.cwsent.com>
 <20201030233138.GD34923@zxy.spb.ru>
 <202010302350.09UNoVcM033686@slippy.cwsent.com>
 <CAPrugNoYZS4wcyrpQ0584jZM1zTnwds7rCQPtm5ahJ8Gm91H1A@mail.gmail.com>
In message <CAPrugNoYZS4wcyrpQ0584jZM1zTnwds7rCQPtm5ahJ8Gm91H1A@mail.gmail.com>, Matthew Macy writes:
> On Fri, Oct 30, 2020 at 4:50 PM Cy Schubert <Cy.Schubert@cschubert.com> wrote:
> >
> > In message <20201030233138.GD34923@zxy.spb.ru>, Slawa Olhovchenkov writes:
> > > On Fri, Oct 30, 2020 at 04:00:55PM -0700, Cy Schubert wrote:
> > > > > > > More memory stress usually means a performance penalty.
> > > > > > > The usual way to better performance is to reduce memory accesses.
> > > > > >
> > > > > > The reason filesystems (UFS, ZFS, EXT4, etc.) cache is to avoid
> > > > > > disk accesses. Nanoseconds vs milliseconds.
> > > > >
> > > > > I mean comparing the ZoL ZFS ARC vs the old (BSD/OpenSolaris/Illumos)
> > > > > ZFS ARC. Any reason the ARC hit rate rises in the ZoL case?
> > > >
> > > > That's what hit rate is. It's a memory access instead of a disk access.
> > > > That's what you want.
> > >
> > > Is the ZoL ARC hit rate higher than the FreeBSD ARC hit rate?
> >
> > We don't know that. You should be able to find out by running some tests
> > that would populate your ARC, then running the tests again. I see that my
> > -DNO_CLEAN buildworlds run faster when I run them a second or third time
> > after making a minor edit than they did before. Thus I assume it uses
> > memory more efficiently. By default it stores more metadata in ARC: 75%
> > instead of, IIRC, 25% by default.
> >
> > Getting back to your original question. A more efficient ARC would
> > exercise your memory more intensely because you are replacing disk reads
> > with memory reads. And as I said before, the old ZFS "found" weak RAM on
> > three separate occasions in three different machines over the last ten
> > years. You're advised to replace the marginal memory.
>
> Ryan has been able to reproduce this in a VM with 4GB; similarly, a VM
> with 2GB loads just fine. It would seem that 4GB triggers a bug in
> limit handling.
> We're hoping that we can simply lower one of the
> default limits on i386 and make the problem go away.
>
> Please don't shoot the messenger when I observe that, generally
> speaking, i386 is considered a self-supported platform due to ZFS's
> general inability to perform well with limited memory or KVA. Long
> mode has been available on virtually all processors shipped since
> 2006.

Yes, I was able to use ZFS on a 2 GB Pentium-M (i386) laptop for many
years. ZFS worked well with a little tuning on such a small machine. The
last time I booted it was late last year or early this year. It's in a
drawer right now. I'll try to pull it out this coming week to test it out.
Serendipitous that I was thinking about pulling out that old laptop to
test out the new ZFS just last week.

-- 
Cheers,
Cy Schubert <Cy.Schubert@cschubert.com>
FreeBSD UNIX:  <cy@FreeBSD.org>   Web:  https://FreeBSD.org
NTP:           <cy@nwtime.org>    Web:  https://nwtime.org

	The need of the many outweighs the greed of the few.
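[Editor's note: the thread turns on comparing ARC hit rates before and after the OpenZFS import. On FreeBSD the ARC hit and miss counters are exposed as sysctl kstats (kstat.zfs.misc.arcstats.hits and kstat.zfs.misc.arcstats.misses). The sketch below computes a hit-rate percentage from those two counters; the argument handling and the fallback sample values are illustrative additions, not from the thread.]

```shell
#!/bin/sh
# Compute an ARC hit-rate percentage from hit/miss counters.
# On a FreeBSD system with ZFS loaded, the live counters come from:
#   hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
#   misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
# Counters may also be passed as arguments so the script runs where
# those sysctls are unavailable (the defaults here are made-up samples).
hits=${1:-900}
misses=${2:-100}
total=$((hits + misses))
# Integer percentage is good enough for a before/after comparison.
rate=$((100 * hits / total))
echo "ARC hit rate: ${rate}% (${hits} hits, ${misses} misses)"
```

Running it twice around a repeated workload (such as the -DNO_CLEAN buildworld mentioned above) shows whether the ARC is absorbing the second run's reads.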