Date: Fri, 4 Aug 1995 13:27:44 -0800
From: "Jim Howard" <jiho@sierra.net>
To: davidg@root.com, freebsd-questions@freefall.cdrom.com
Subject: Re: 2.0.5 Eager to go into swap
Message-ID: <199508042124.AA04253@diamond.sierra.net>
The citations that follow are from davidg@root.com (David Greenman).

> We use the Berkeley malloc by default which causes power of 2 allocations
> to allocate twice as much memory as is needed. It's a function of its design -
> it takes a few bytes more than it needs for the allocation, and the allocation
> buckets are power of 2. So a request for a power of 2 amount causes the
> allocation to fall into the next bucket (which is twice as large).

Now here's a point even I can understand. Thank you for the explanation. And
it's easy to fix--just use a different malloc().

This seems to be another example of how BSD was optimized for the VAX as a
multiuser system. About the only way to speed a VAX up was to add RAM, so
whenever Berkeley had a speed-versus-space tradeoff to make, they optimized
for speed (however trivial it may seem now) at the expense of RAM. All
perfectly legitimate, and the net result we got was an OS with speed to burn
on today's PCs. Some of those choices do now appear a bit out of place on a
standalone PC, though.
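To make the bucket arithmetic concrete, here is a toy C program. The
per-allocation overhead and the minimum bucket size are assumptions made up
for the illustration, not the actual figures from the Berkeley allocator:

    /*
     * Toy illustration of power-of-2 bucket allocation as described
     * above.  OVERHEAD and MIN_BUCKET are invented for the example;
     * the real allocator differs in detail.
     */
    #include <stdio.h>

    #define OVERHEAD   8    /* assumed per-allocation bookkeeping bytes */
    #define MIN_BUCKET 16   /* assumed smallest bucket */

    /* Round a request up to the power-of-2 bucket it would land in. */
    static unsigned long
    bucket_size(unsigned long request)
    {
        unsigned long bucket = MIN_BUCKET;

        while (bucket < request + OVERHEAD)
            bucket *= 2;
        return bucket;
    }

    int
    main(void)
    {
        unsigned long sizes[] = { 100, 1000, 1024, 4000, 4096 };
        int i;

        for (i = 0; i < 5; i++)
            printf("malloc(%5lu) -> %5lu byte bucket\n",
                sizes[i], bucket_size(sizes[i]));
        return 0;
    }

With those assumptions, malloc(4096) falls into an 8192-byte bucket (twice
the request), while malloc(4000) still fits in 4096 bytes. That is why a
replacement malloc, or simply asking for a little less than a power of 2,
makes such a difference.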
> It is essentially blind. The policy of which pages to reclaim is based on
> frequency of page usage, not on how many people have it mapped.

This I still don't quite understand. I know what you mean here: once the
decision is made to swap, the swapper doesn't care whether a given page is
shared or not, and there's no reason why it should. That makes perfect sense.
(A minimal sketch of that policy is at the end of this message.)

I was thinking of the higher-level decision to swap at all, based on how many
total pages are physically occupied. How is this decision reached? The issue
arose because, looking at the resident set size ("rss") figures from ps and
the swapinfo figure for how much of the swap partition had been used, versus
how much physical RAM I had (after counting out kernel overhead, including
the buffer cache on 2.0), I couldn't help thinking that the rss figures added
up to roughly the same ballpark as RAM usage plus swap usage. In other words,
it looked like the decision that swapping was necessary was blind to
page-sharing overlap among the processes, so swapping started well before
physical RAM was actually exhausted. That level of blindness. (A toy
illustration of how summing rss double-counts shared pages is also at the end
of this message.) In another message, Dyson implies strongly that this is not
the case, but he isn't very explicit, so I'm not really sure he understood me
either. He talks mostly about CPU usage, an issue I've never raised. (But see
my comment above about BSD being optimized for the VAX.)

> Horrendous swapping? If that is true on machines with >=16MB of memory,
> then perhaps there is a problem...but that doesn't match my own experience.

Well, I consider it horrendous with 8 MB. But I agree that's true for any
platform running X, not just FreeBSD. If you accept X's requirements as
normal, then 16 MB is a reasonable requirement.

> >I had more trouble running
> >out of swap with X under Linux than I've had under FreeBSD.
>
> That begs the question "Then why are you complaining?". :-)

Well, I'm not complaining so much as trying to learn. Some of my "complaints"
are more like impressions asking to be corrected with information. Others are
paranoid delusions, and a few of them are ongoing disagreements with reality.

> >This all seems to be degenerating into a useless flap without addressing
> >the original issue much.
> >
> >I remember reading a gripe column in BYTE, where a networking support
> >guy at Cray was complaining because customers wanted to run X on
> >laptops and he couldn't do it for them. He recalled a time when X ran fine
> >on Suns with 4 MB of RAM. Now we have a user who likes FreeBSD
> >because it runs X fine with 16 MB (although others dispute that even with
> >32 MB). I don't see where the basic server (extensions aside) has acquired
> >much new functionality to account for the difference. It's just quadrupled
> >in size and extrapolated its RAM requirements.
> >
> >Maybe that's why nobody wants to deal with this issue--it collapses into a
> >flame war and nobody can do anything about it anyway!
>
> There are different levels of "fine"ness. I had a VAXstation-2000 here for
> awhile. It runs a version of X11R3...and has only 6MB of RAM. It is absolutely
> DOG slow...and not because the CPU is slow, but because it thrashes
> constantly. Using it was an absolute pain. Similarly, a MicroVAX with 16MB of
> memory also paged a little when running X - especially once you start using
> things like gcc which is a complete memory pig (needing >3MB of memory).

Well, maybe the guy at Cray was stretching the case. It's all hearsay anyway;
the first version I saw was X11R5. Maybe he was talking about the GUI Sun used
BEFORE they adopted X?!? But GUIs in general used to be much smaller than they
are today--they had to be, because the machines just didn't have the RAM.
Atari would have used the original Microsoft Windows in their 520ST, had it
shipped on time. Look at Windows now: they can't make it fit into PDAs. Or
compare Bell Labs' MGR to X. Silicon Graphics is trying to figure out how to
make an X-based TV box for the "Information Superhighway" that consumers can
actually afford.

GNU software does tend to be piggy. All the GNU packages are big to start
with, and they all want to compile with debugging support by default,
tripling the basic size.

Enough ranting and raving. It all has to do with background. I come from an
early microcomputer background myself. Berkeley, the X Consortium, and the
Free Software Foundation are all minicomputer/workstation-oriented
organizations. Things that make sense from that perspective look outrageous
from my pre-PC point of view. (Again, see my comment above about BSD on the
VAX.)
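P.S. Here is a minimal sketch of the reclaim policy as David describes it:
the victim page is chosen purely by how little it has been used, and the
number of processes mapping it is never consulted. The structure, field
names, and numbers are invented for illustration; the real pageout code is
far more involved.

    #include <stddef.h>
    #include <stdio.h>

    struct page {
        unsigned int usage;     /* aged count of recent references */
        unsigned int mapcount;  /* how many processes map the page */
    };

    /* Pick a victim: least-used wins; sharing never enters into it. */
    static struct page *
    choose_victim(struct page *pages, size_t npages)
    {
        struct page *victim = &pages[0];
        size_t i;

        for (i = 1; i < npages; i++)
            if (pages[i].usage < victim->usage)
                victim = &pages[i];
        return victim;          /* mapcount was never looked at */
    }

    int
    main(void)
    {
        struct page mem[] = { { 40, 1 }, { 2, 6 }, { 17, 1 } };
        struct page *v = choose_victim(mem, 3);

        printf("victim: usage %u, mapped by %u processes\n",
            v->usage, v->mapcount);
        return 0;
    }

In this made-up example the page mapped by six processes is the one evicted,
simply because it was the least used.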
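P.P.S. And a toy model of the rss arithmetic: pages shared between processes
(the X server text, shared libraries) show up once in every sharing process's
rss, so the column can sum to more than is physically occupied. All the
numbers here are invented.

    /*
     * Toy model of how per-process rss figures double-count shared
     * pages.  Sizes are invented for illustration.
     */
    #include <stdio.h>

    struct proc {
        const char *name;
        unsigned long private_kb;   /* pages only this process maps */
        unsigned long shared_kb;    /* pages shared with the others  */
    };

    int
    main(void)
    {
        /* Three hypothetical processes all mapping the same 1200K of
         * shared library and server text. */
        struct proc procs[] = {
            { "X",     2400, 1200 },
            { "xterm",  300, 1200 },
            { "emacs", 1800, 1200 },
        };
        unsigned long sum_rss = 0;
        unsigned long occupied = 1200;  /* the shared pages, counted once */
        int i;

        for (i = 0; i < 3; i++) {
            sum_rss  += procs[i].private_kb + procs[i].shared_kb;
            occupied += procs[i].private_kb;
        }
        printf("sum of ps rss figures:   %luK\n", sum_rss);
        printf("pages actually occupied: %luK\n", occupied);
        return 0;
    }

If rss is reported that way, the column adding up to roughly RAM plus swap
wouldn't by itself settle, either way, whether the decision to start paging
ignores sharing.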