From: John Stockdale <jstockdale@stanford.edu>
To: current@freebsd.org
Date: Sun, 6 Jul 2003 00:27:04 +1200
Subject: Storage Management/Auditing

For lack of a better place to ask, and after a few fruitless hours of googling, I was hoping someone on the list could enlighten me on the subject of storage system management and auditing.

I currently administer a low-load, high-capacity storage array (RAID 5, 1.4 TB) and would like a quick way to determine activity (file additions, deletions, etc.), file system usage, and any other relevant information about the array. Hardware status is already covered (via the 3ware monitoring tools), but beyond df/du I have no way to check usage and changes, which is tedious and becomes less practical as the file system grows.
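For now the best I've come up with is diffing periodic du snapshots. A rough sketch of what I mean (the paths are made up, and I'm assuming du's -d depth flag; adjust for your own layout):

```shell
#!/bin/sh
# Sketch: snapshot per-directory usage once a day (e.g. from cron) and
# diff against the previous snapshot to see what grew or shrank.
# The array mount point and snapshot directory below are placeholders.

snapshot_usage() {
    array=$1      # e.g. /storage
    snapdir=$2    # e.g. /var/db/du-snapshots

    mkdir -p "$snapdir"
    today="$snapdir/du.$(date +%Y%m%d%H%M%S)"

    # Usage of each top-level directory in 1K blocks, sorted on the
    # path column so successive snapshots diff cleanly.
    du -k -d 1 "$array" 2>/dev/null | sort -k 2 > "$today"

    # Compare against the newest earlier snapshot, if one exists.
    prev=$(ls -1t "$snapdir"/du.* 2>/dev/null | sed -n 2p)
    [ -n "$prev" ] && diff "$prev" "$today"
    return 0
}
```

Run from cron, the diff output shows only the directories whose block counts changed since the last run, which at least narrows down where activity happened without walking the whole tree by hand.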
I was thinking I could use AIDE to track the changes, but the current port is broken (I'm emailing the maintainer separately), so I thought I'd look for a better solution. Any suggestions would be helpful. Thanks.

-John