From owner-freebsd-fs@FreeBSD.ORG Tue May 22 12:45:09 2007
From: Eric Anderson <anderson@freebsd.org>
Date: Tue, 22 May 2007 07:45:01 -0500
To: Gore Jarold
Cc: freebsd-fs@freebsd.org
Subject: Re: VERY frustrated with FreeBSD/UFS stability - please help or comment...
Message-ID: <4652E5CD.6000901@freebsd.org>
In-Reply-To: <475187.33232.qm@web63006.mail.re1.yahoo.com>

On 05/21/07 14:16, Gore Jarold wrote:
> --- Brooks Davis wrote:
>
>>> a) am I really the only person in the world that moves
>>> around millions of inodes throughout the day?
>>> Am I the only person in the world that has ever filled up a
>>> snapshotted FS (or a quota'd FS, for that matter)?
>>> Am I the only person in the world that does a mass
>>> deletion of several hundred thousand inodes several
>>> times per day?
>>>
>>> OR:
>>>
>>> b) am I just stupid? Is everyone doing this, and
>>> there are 3 pages of sysctls and kernel tunes that
>>> everyone applies to their system when they are going to
>>> use it this way? Am I just naive for taking a
>>> release, paring down GENERIC, and attempting to run
>>> as-is out of the box without major tuning?
>>>
>>> If so, can I see those tunes/sysctls?
>>>
>>> I am _really_ hoping that it is (b) ... I would much
>>> rather look back on all of this frustration as my own
>>> fault than have the burden of proving all of this (as
>>> I will no doubt be called upon to do). (1)
>>>
>>> Thanks. Please add your comments...
>>
>> I'd say it's certainly (a). Consider that a full source tree
>> contains a bit under 85K files, so that's a reasonable bound on
>> average workloads. Deliberately producing a kernel that required
>> tuning just to use the APIs without crashing would be stupid, and
>> we wouldn't do it without a very good reason and very large
>> warnings all over the place. Lousy performance might be expected,
>> but crashing wouldn't be.
>
> Ok - your initial comments / impression are
> reassuring. It's hard to believe that the simple file
> movements I do are so alien to mainstream use, but
> I'll accept your judgement.
>
>>> (1) just load up 6.2 and cp/rm a few million inodes
>>> around. Or turn on quotas and fill your filesystem
>>> up. Kaboom.
>>
>> It's not clear to me what you mean by "cp/rm a few million
>> inodes around." The organization of those inodes into files and
>> directories could conceivably have a major impact on the
>> problem. If you could provide a script that fails for you, that
>> would really help.
>
> Specifically, I have private departmental fileservers
> that other fileservers rsync to, using Mike Rubel-style
> rsync snapshots:
>
> http://www.mikerubel.org/computers/rsync_snapshots/
>
> This means that the remote system runs a script like this:
>
> ssh user@host rm -rf backup.2
> ssh user@host mv backup.1 backup.2
> ssh user@host cp -al backup.0 backup.1
> rsync /files user@host:/backup.0
>
> The /files in question range from 0.2 to 2.2 million
> files, all told. This means that when this script
> runs, it first either deletes OR unlinks up to 2
> million items. Then it does a (presumably) zero-cost
> move operation. Then it does a hard-link-creating cp
> of the same (up to 2 million) items.
>
> As I write this, I realize this isn't _totally_
> generic, since I am using GNU cp rather than the
> built-in FreeBSD cp, but that is _truly_ the extent of
> customization on this system.

A few quick comments:

- Why use GNU cp when you can use our own cp? You could do 'cp -Rpl' instead.
- You could probably save some time by using rsync's '--link-dest' option.

Eric
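[Editor's note: the rotation described in the quoted script can be sketched as a small local demo. Paths below are hypothetical, and the GNU tools `cp -al` and `stat -c` are assumed; on FreeBSD the rough equivalents are `cp -Rpl` (as Eric suggests) and `stat -f %l`.]

```shell
#!/bin/sh
# Local sketch of a Mike Rubel-style hard-link snapshot rotation.
set -eu
base=$(mktemp -d)

# Seed the "current" snapshot with one file.
mkdir "$base/backup.0"
echo "data" > "$base/backup.0/file.txt"

# Rotate: drop the oldest snapshot, age the previous one,
# then hard-link the current tree into backup.1.
rm -rf "$base/backup.2"
if [ -d "$base/backup.1" ]; then
    mv "$base/backup.1" "$base/backup.2"
fi
cp -al "$base/backup.0" "$base/backup.1"   # GNU cp; FreeBSD: cp -Rpl

# Both snapshots now name the same inode, so the link count is 2
# and no extra data blocks were consumed.
stat -c %h "$base/backup.1/file.txt"       # prints 2
```

In the real script, an `rsync /files user@host:/backup.0` would then refresh backup.0 in place; rsync's `--link-dest=../backup.1` option can fold the `cp -al` step into the transfer itself.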