From: Joze Volf <joze@ilab.si>
Date: Mon, 18 Aug 2008 08:40:07 +0200
To: Bill Moran
Cc: freebsd-questions@freebsd.org
Subject: Re: Large RAID arrays, partitioning
Message-ID: <48A91947.7040705@ilab.si>
In-Reply-To: <20080815075703.951bacb9.wmoran@potentialtech.com>

Thanks for your opinion. For now I will stick with the large RAID volume
and no slices/partitions. As you said, it makes life less complicated.

I don't think future upgrades will have a problem supporting large
volumes; I expect there will be support for even larger ones. The more
important concern for me is what to do if capacity needs rise from a few
TB to a few dozen or a few hundred TB. I guess there is only one
economical solution for my project: the Lustre cluster file system.

Vinum... Since I found a much simpler solution, I think there is no need
to implement it. My personal opinion is that there is no excuse for using
software RAID solutions on production server systems (except RAID1 where
money is really tight). Most HW RAID controllers are well supported on
Linux and xBSD, and the advantages of hot-swappable drives, battery-backed
write cache and high-performance XOR IOPS are very important for 24x7
systems. This does not mean that I don't want to mess with vinum; I will
when there is enough time for it.
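
For reference, the kind of concatenated gvinum setup I was aiming for
would probably look something like the sketch below. It is untested and
the device names are only placeholders (and, as far as I understand,
gvinum normally wants its drives on BSD partitions of type "vinum", e.g.
da1s1h, rather than on the raw da devices):

  # hypothetical gvinum config, saved e.g. as /tmp/media.conf
  drive v0 device /dev/da1s1h
  drive v1 device /dev/da2s1h
  volume media
    plex org concat
      # length 0 = use all remaining space on the drive
      sd length 0 drive v0
      sd length 0 drive v1

  # then, roughly:
  gvinum create /tmp/media.conf
  newfs -U /dev/gvinum/media
  mount /dev/gvinum/media /var/media

But as I said, the single large volume made all of this unnecessary.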
Regards,
Joze


Bill Moran wrote:
> In response to Joze Volf:
>
>> I have an HP DL320s 2U server with 12 500 GB SATA drives and a Smart
>> Array P400 RAID controller. The machine will be a video streaming
>> server for a public library. The system I am installing is
>> 7.0-RELEASE, amd64.
>>
>> I made 2 RAID6 volumes, one 120 GB for the system and one 4.3 TB for
>> the streaming media content. The first problem I encountered was that
>> during installation the large RAID volume wasn't visible. No problem,
>> because I could install the system to the small 120 GB volume.
>>
>> After the base system installation I decided to delete the large
>> volume using the HP ACU and create a few smaller 1 TB volumes, which
>> would hopefully be recognized by the kernel. They were, but when I ran
>> fdisk from sysinstall it always reported:
>>
>> WARNING: A geometry of xxxxxxx/255/32 for da1 is incorrect. Using a
>> more likely geometry. If this geometry is incorrect...
>
> That always happens. I don't remember the last time I saw a disk where
> it _didn't_ complain about that. Don't know the details of what's going
> on there, but I've never seen it cause a problem.
>
>> I was trying to make a few 1 TB vinum partitions and tie them together
>> into a single concatenated volume (I already did something similar in
>> Linux using LVM and it worked great). I had no success.
>
> Well, can't help you much if you don't describe what you tried to do
> here.
>
>> Then I searched the web and found this patch
>> http://yogurt.org/FreeBSD/ciss_large.diff and hoped it would resolve
>> the geometry problem. It did not, but the other thing it should do is
>> allow the kernel to create a da device for an array > 2 TB. It did!
>
> What version of FreeBSD is this? It looks like this driver has seen
> significant redesign in 7-STABLE.
>
>> I deleted the smaller 1 TB volumes and recreated one large 4.3 TB RAID
>> volume. The kernel recognized it perfectly as /dev/da1. Great! Then I
>> tried to create a slice using sysinstall fdisk and a filesystem using
>> sysinstall label. Nothing but trouble!
>
> Again, without any details, not much anyone can do to help.
>
>> I searched the web again and found a possible solution to my problem.
>> I used the "newfs -U -O2 /dev/da1" command to create the filesystem
>> directly on the RAID volume. It worked without a problem. Then I
>> mounted /dev/da1 to /var/media and here is the output of the "df -h"
>> command:
>>
>> Filesystem     Size    Used   Avail Capacity  Mounted on
>> /dev/da0s1a    4.3G    377M    3.6G     9%    /
>> devfs          1.0K    1.0K      0B   100%    /dev
>> /dev/da0s1e    7.7G     12K    7.1G     0%    /tmp
>> /dev/da0s1f     36G    1.6G     31G     5%    /usr
>> /dev/da0s1d     58G     25M     53G     0%    /var
>> /dev/da1       4.3T    4.0K    4.0T     0%    /var/media
>>
>> Is it somehow bad to make a filesystem directly on a storage device
>> such as a disk drive or hardware RAID volume?
>
> Yes and no. If you use certain types of disk utilities, such as
> bootable CDs that check disk health and whatnot, they may get confused
> by the fact that there is no DOS-style fdisk partition on the disk.
>
> Otherwise, it works fine. I frequently do this to make my life simpler
> (why install partitions when you don't need them?). It also wastes less
> disk space (although, who cares about a few hundred bytes on a 4 TB
> disk). Now that you've got it up and running, I'd be more concerned
> about making sure your next FreeBSD upgrade will continue to support a
> disk that size.
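
P.S. In case it helps anyone searching the archives, this is roughly what
I ended up doing. The fstab line is only my guess at the obvious entry and
is untested:

  newfs -U -O2 /dev/da1
  mount /dev/da1 /var/media

  # proposed /etc/fstab entry (untested)
  /dev/da1    /var/media    ufs    rw    2    2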