Date: Tue, 27 Sep 2005 13:42:57 +0100
From: Brian Candler <B.Candler@pobox.com>
To: Steve Rieger <steve.rieger@tbwachiat.com>
Cc: freebsd-isp@freebsd.org
Subject: Re: FATAL: erealloc(): Unable to allocate 577925121 bytes
Message-ID: <20050927124256.GA49100@uk.tiscali.com>
In-Reply-To: <A2205B5D-9928-44F9-B0C3-F82ACB17E9E0@tbwachiat.com>
References: <A2205B5D-9928-44F9-B0C3-F82ACB17E9E0@tbwachiat.com>
> as far as i can see i am not doing anything wrong, then why cant i
> download a 551 MB file

You're probably hitting the default 512MB maximum process data segment
limit somewhere - on the client end, I'd guess, since I would expect
Apache to use sendfile() to transmit a large file. Try typing:

$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) 524288    << THIS LIMIT
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 11095
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 65536
cpu time               (seconds, -t) unlimited
max user processes              (-u) 5547
virtual memory          (kbytes, -v) unlimited

Now, I can never remember how to increase this, and I always have to
rummage around in the kernel source code. Ah yes, it's

    options MAXDSIZ=(1024UL*1024*1024)

in the kernel configuration. See /usr/src/sys/conf/NOTES. (A
programmatic way to inspect and raise the per-process limit is
sketched below.)

However, it seems to me that's the wrong fix here. If an application
needs to download 1GB of data, it should spool it to disk as it goes,
not accumulate the whole file in RAM and only write it out at the end
(or worse: buffer it in RAM, have that pushed out to swap space on
disk, and later pull it back into RAM before finally writing it to
the filesystem). At the very least, that's a very poor utilisation of
system resources. (A minimal copy-loop sketch of the spooling
approach also follows below.)

Regards,

Brian.
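
For anyone scripting this rather than typing ulimit by hand, the same
limit can be inspected and raised in-process. A minimal C sketch (not
from the original message) using the standard getrlimit()/setrlimit()
interface; RLIMIT_DATA corresponds to the "data seg size" row above:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* RLIMIT_DATA is the data segment limit reported by ulimit -d. */
        if (getrlimit(RLIMIT_DATA, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        /* An unlimited value (RLIM_INFINITY) prints as a very large
           number here. */
        printf("data seg size: soft=%lld hard=%lld bytes\n",
               (long long)rl.rlim_cur, (long long)rl.rlim_max);

        /* A process may raise its soft limit up to the hard limit;
           raising the hard limit itself requires root. */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_DATA, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }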
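
And to make the spooling argument concrete: "write it to disk as you
go" is just a fixed-size copy loop. A hypothetical sketch (again not
from the original message) that copies stdin to a file using a
constant 64KB of buffer, however large the transfer:

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char buf[64 * 1024];   /* fixed buffer instead of the whole file */
        size_t n;
        FILE *out;

        if (argc != 2) {
            fprintf(stderr, "usage: %s outfile\n", argv[0]);
            return 1;
        }
        if ((out = fopen(argv[1], "wb")) == NULL) {
            perror("fopen");
            return 1;
        }
        /* Each chunk is written out before the next is read, so memory
           use stays at sizeof buf no matter how big the download is. */
        while ((n = fread(buf, 1, sizeof buf, stdin)) > 0) {
            if (fwrite(buf, 1, n, out) != n) {
                perror("fwrite");
                fclose(out);
                return 1;
            }
        }
        fclose(out);
        return 0;
    }

Fed by, say, fetch -o - piping the download into it, this keeps the
process data segment far below the 512MB default that triggered the
original erealloc() failure.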