From: "Daniel A." <ldrada@gmail.com>
Date: Mon, 3 Apr 2006 13:48:42 +0200
To: "Olaf Greve"
Cc: freebsd-questions
Subject: Re: How can I increase the shell's (or specific application's) memory limit?

On 4/3/06, Olaf Greve wrote:
> Hi,
>
> I've got a question which is probably pretty easy to answer: how can I
> assign more memory to a PHP script running in a shell and/or in a browser?
>
> Some more background info:
> I'm building a PHP script that has to retrieve pretty large sets of data
> from a remote MySQL database, process them, and store the results in a
> local database.
>
> The issue:
> The script (surprise, surprise) quickly runs out of memory. I have already
> tried increasing the memory limit in php.ini (followed by an Apache
> restart, of course), but even when setting the limit to something high
> like 384MB, the script still bails out with a memory limit error when
> retrieving as little as some 50MB of data...
>
> Now, of course I could rewrite my PHP script so that it retrieves smaller
> batches of data, but being a programmer I'm lazy, and I'd rather simply
> assign more memory to the script (actually, it's not only laziness: the
> script also has to aggregate data etc., and for a variety of reasons I'd
> rather have it do that in one run).
>
> It seems that setting the memory limit in php.ini to anything above 64MB
> (or so) no longer has any effect. My assumption is that the memory limit
> is being enforced somewhere else (by the shell perhaps, and/or Apache?).
>
> Can anyone tell me how to adjust this so that I can successfully assign,
> say, 384MB of memory to PHP scripts run both from browsers (i.e. through
> Apache 2.2 and mod_php) and from the command line?
>
> Tnx in advance, and cheers,
> Olafo

Hi Olaf,

Generally, I think it's bad programming practice to retrieve such big
datasets if it is possible to do otherwise.

Consider this example: I needed an app that writes a file of a set size.
The most obvious way to do that is to build a single random string X bytes
long and write it out in one go. For a large X, however, that is not
feasible, and it is very slow. The solution is to write the file in chunks,
picking a chunk size that is neither too small nor too big, so that disk
I/O and CPU time are balanced well.

In the same way, I suggest that you retrieve only a certain number of SQL
rows at a time, process them, and insert them into your local database.
This will make your application a lot faster, by the way.
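
To illustrate the file example, here is an untested sketch with made-up
sizes and a made-up path:

<?php
// Untested sketch: instead of building one huge X-byte string in memory,
// write fixed-size chunks until X bytes have been written.
$target = 512 * 1024 * 1024;   // X = 512MB (made-up figure)
$chunk  = 64 * 1024;           // 64KB per write; tune to balance I/O and CPU

$fp = fopen('/tmp/filler.dat', 'wb');   // made-up path
$written = 0;
while ($written < $target) {
    $n = min($chunk, $target - $written);
    fwrite($fp, str_repeat('x', $n));   // or random bytes, same idea
    $written += $n;
}
fclose($fp);
?>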
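
And back to your case, this is roughly what I mean by batching the rows.
It's an untested sketch using mysqli, with made-up hosts, credentials,
table and column names, so adapt it to your schema and tune the batch size:

<?php
// Untested sketch: pull rows from the remote database in fixed-size batches
// so memory use stays roughly constant, no matter how big the result set is.
// Hosts, credentials, table and column names are made up for illustration.
$remote = new mysqli('remote.example.com', 'user', 'pass', 'remotedb');
$local  = new mysqli('localhost', 'user', 'pass', 'localdb');

$batchSize = 5000;   // tune: large enough to keep round trips low,
                     // small enough to stay well under memory_limit
$offset = 0;

do {
    $result = $remote->query(
        "SELECT id, payload FROM source_data ORDER BY id LIMIT $offset, $batchSize");
    $rows = 0;

    while ($row = $result->fetch_assoc()) {
        // ... aggregate/process $row here ...
        $id      = (int) $row['id'];
        $payload = $local->real_escape_string($row['payload']);
        $local->query("INSERT INTO processed_data (id, payload) VALUES ($id, '$payload')");
        $rows++;
    }

    $result->free();              // release this batch before fetching the next
    $offset += $batchSize;
} while ($rows == $batchSize);    // a short batch means we reached the end

$remote->close();
$local->close();
?>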