Date:      Fri, 15 Dec 2006 00:05:28 +0100
From:      Fabian Wenk <fabian@wenks.ch>
To:        freebsd-amd64@freebsd.org
Subject:   Re: Areca Weirdness
Message-ID:  <4581D8B8.7090408@wenks.ch>
In-Reply-To: <070f01c704bc$d66933d0$0200a8c0@lfarr>
References:  <070f01c704bc$d66933d0$0200a8c0@lfarr>

Hello Lawrence

Sorry for the late answer; I hope it can still help you.

Lawrence Farr wrote:
> I've got an Areca 12 port card running a 6Tb array which is divided
> into 2.1Tb chunks at the moment, as it was doing the same with a
> single 6Tb partition.

> If I newfs it, and copy data to it, I have no problem initially.
> If I then try and copy the data on the disk already to a new
> folder, the machine reboots (it's a remote host with no serial
> attached currently). When it comes back to life, it mounts, and
> shows as:
> 
> /dev/da0       2.1T    343G    1.6T    18%    /usr/home/areca1
> 
> But is completely empty. Unmounting it and trying to fsck it
> errors, as does mounting it by hand.

Strange.

I recently built a system running FreeBSD/amd64 6.1-RELEASE 
with 3 Areca controllers: 1x 4-port with 2x 80 GB disks mirrored 
for the system disk, and 2x 12-port with a total of 24x 500 GB 
disks for the data partition. This system is used as a backup 
server.

On each 12-port controller I created one raidset with all 12 
disks (6000 GB) and on top of this one volume with RAID-6 
(5000 GB). In FreeBSD I concatenated these two together with ccd, 
using just the raw /dev/da1 and /dev/da2 devices, roughly as 
sketched below.
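From memory, the setup looked something like this (a sketch, not 
the exact commands I used; device names, the mount point and the 
interleave value are assumptions, see ccd(4) and ccdconfig(8) -- 
an interleave of 0 should give plain concatenation):

  # /etc/ccd.conf
  # ccd   ileave  flags   component devices
  ccd0    0       none    /dev/da1 /dev/da2

  ccdconfig -C           # configure everything in /etc/ccd.conf
  newfs -U /dev/ccd0     # one big UFS2 filesystem, soft updates
  mount /dev/ccd0 /backup

With /etc/ccd.conf in place the ccd should be configured again at 
boot (via the ccd rc script, if I remember right; verify that on 
your system).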

In the beginning, while I could still play around with the 
system, I tried a few things. Once I also tried to use the raw 
5000 GB disk exported from the controller, but I could not create 
partitions and slices on it; bsdlabel has a problem with 
partitions over 1 TB. Maybe this is the problem you are running 
into. You could try using ccd with just one disk, as sketched 
below.
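A single-component ccd avoids bsdlabel entirely, because you can 
newfs the ccd device directly. Something like this might be worth 
a try (device name is an assumption):

  ccdconfig ccd0 0 none /dev/da0   # one component, no striping
  newfs -U /dev/ccd0
  mount /dev/ccd0 /usr/home/areca1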

The other large filesystems (from 4.5 TB to almost 10 TB) I 
already maintain also use ccd. They are running with 2x 12-port 
3Ware and 3x 8-port ICP (older generation) controllers for the 
data partitions. From the performance point of view I like the 
Areca very much: 'tar -zxf ports.tar.gz' takes only around 25 
seconds (controller with BBU and write cache enabled, filesystem 
with ccd over the 24 disks on 2 controllers with RAID-6). 3Ware 
is slow with smaller files, and the new ICP (now Adaptec) 
controllers are not supported by FreeBSD (which does not make me 
unhappy ;). The old ICP only supports SATA-1, which today is also 
a minus.
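If you want to compare numbers on your array, that benchmark is 
simply (the path to the tarball is a placeholder):

  cd /usr/home/areca1
  time tar -zxf /path/to/ports.tar.gz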

From Areca (ask their support) you can get the 64-bit version of 
the cli tool for FreeBSD. When world is built with 
'WITH_LIB32=yes' in /etc/make.conf, the 32-bit cli32 tool will 
run as well.
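For reference, that is one line plus a rebuild (a sketch; 
double-check the knob for your source tree in make.conf(5)):

  # /etc/make.conf
  WITH_LIB32=yes    # also build the 32-bit compatibility libraries

  cd /usr/src
  make buildworld && make installworld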

I also once installed FreeBSD/amd64 on a system with only an ICP 
controller. It ran just fine, but the tool to monitor the RAID 
did not work; ICP/Adaptec does not provide a 64-bit version of it 
and only supports 32-bit. This forced me to reinstall that system 
with FreeBSD/i386. Maybe it could have run with the 
'WITH_LIB32=yes' option in /etc/make.conf, but I did not know 
that back then. I don't remember exactly, but the ICP tools first 
complained about a missing library, which I copied over from a 
32-bit system, but in the end they complained about not being 
able to connect to the controller.

> Are there any known issues with the driver on AMD64? I had
> major issues with it on Linux/386 with large memory support
> (it would behave equally strangely) that went away when I
> took large memory support out, maybe there are some non 64
> bit safe parts common to both?

I guess on Linux you will need a very new 2.6.x kernel to be able 
to use the Areca controller. The next problem could be support 
for block devices and filesystems over 2 TB, which probably also 
needs to be enabled.
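If I remember correctly, on 32-bit Linux that support is a kernel 
build option; something along these lines should show whether it 
is enabled (option name from memory, and the config file location 
is distribution-dependent, so please verify):

  # 'Large Block Devices' support on 32-bit 2.6 kernels
  grep CONFIG_LBD /boot/config-$(uname -r)
  # CONFIG_LBD=y means block devices over 2 TB are usable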

I also had some interesting effects with "disks" of around 3 TB 
on the ICP controller. The FreeBSD kernel saw the disks, but only 
with 2 TB. It was easy to solve: on the ICP controller a RAID-5 
can be split into 2 virtual disks, which the OS will then see. As 
I used ccd anyway, this was no big deal.

On the older 3Ware (8xxx) I could only create a RAID-5 of at most 
2 TB (on the controller), so with 12x 250 GB disks I had to build 
2x 1.25 TB arrays to export as disks to the OS. With the 9xxx 
3Ware, volumes over 2 TB were possible, but they ended at around 
2.5 TB, which just barely worked with 12x 250 GB disks and 
RAID-5.

As I now know, building and using large (e.g. over 2 TB) 
filesystems always gives some headaches. Doing a fsck on a large 
filesystem can also use a large amount of memory. Then the option 
'kern.maxdsiz="2147483648"' (2 GB) in /boot/loader.conf comes in 
handy; by default a process can only grow to 512 MB. If fsck 
starts to use swap, you need more memory. I once let a fsck run 
that grew to 1.5 GB with 1 GB of memory; after around 24 hours it 
was still not finished. I then put in more memory (+1 GB) and 
restarted fsck, and it was done in around 8 hours (4.5 TB on the 
8xxx 3Ware). I don't know how long a fsck on the Areca will take; 
I will see when I have to do one.
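Spelled out, that is a single loader tunable (value from above; 
after a reboot 'sysctl kern.maxdsiz' should report the new 
value):

  # /boot/loader.conf
  kern.maxdsiz="2147483648"   # let one process (e.g. fsck) grow to 2 GB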


bye
Fabian


