Date: Tue, 11 Sep 2012 09:40:13 -0700
From: Tim Gustafson <tjg@soe.ucsc.edu>
To: freebsd-hardware@freebsd.org
Subject: Adaptec 51645 JBOD
Message-ID: <CAG27QgR4cmuih-d8uYky7Vhgr%2Bm9Mr=EtMkLuAO1L-pUc7hZTw@mail.gmail.com>
Hi,

I have an Adaptec 51645 with 16 disks attached to it, configured in a
zpool. The machine was running FreeBSD 8.1. On Saturday, I rebooted this
machine for the first time in about 421 days, and the zpool did not come
back up. (Thankfully, the OS was on a separate zpool mirror, which came
up just fine.)

Here are the dmesg entries related to the Adaptec card:

  aac0: <Adaptec RAID 51645> mem 0xf5c00000-0xf5dfffff irq 18 at device 0.0 on pci5
  aac0: Enabling 64-bit address support
  aac0: Enable Raw I/O
  aac0: Enable 64-bit array
  aac0: New comm. interface enabled
  aac0: Adaptec 51645, aac driver 2.1.9-1

arcconf reports that all 16 drives are attached to the card, configured
as JBOD disks, and are totally happy. But I have no disk devices from
this controller in /dev, and zpool reports that all the vdev members are
unavailable.

Somewhere in the back of my mind a little bell is ringing that the
driver for this particular card did not support JBOD disks, so perhaps I
originally configured them on the Adaptec card as single-disk volumes
and added them to the pool that way; but it seems odd that every single
disk would come up as "JBOD" after a reboot.

There was no power event that would have fried this card; indeed, the
reason I shut the machine down in the first place was so that the campus
maintenance folks could work on a transformer, during which this entire
server room was running on backup generator and UPS at reduced capacity.

Is there a way to tell the Adaptec card to re-import the disks however
they were configured before? Why would my configuration have been wiped
out? Is there some "commit the changes to your RAID card NVRAM"
operation that I forgot to do 421 days ago, and now all is lost?

To see whether this was a driver issue, I upgraded the machine to
FreeBSD 9.0, but that did not help.
The odd thing is that doing so changed the output of "zpool status" to:

  NAME                      STATE     READ WRITE CKSUM
  jails                     UNAVAIL      0     0     0
    raidz1-0                UNAVAIL      0     0     0
      993249040670530816    UNAVAIL      0     0     0  was /dev/da0
      101352666830918296    UNAVAIL      0     0     0  was /dev/da1
      7490690064963814786   UNAVAIL      0     0     0  was /dev/da2
      13924510904345345941  UNAVAIL      0     0     0  was /dev/da3
      4013204832063755390   UNAVAIL      0     0     0  was /dev/da4
      6436589046534957596   UNAVAIL      0     0     0  was /dev/da5
      14500669618010181738  UNAVAIL      0     0     0  was /dev/da6
      12694081810231399908  UNAVAIL      0     0     0  was /dev/da7
    raidz1-1                UNAVAIL      0     0     0
      5434688327590400459   UNAVAIL      0     0     0  was /dev/da8
      17670575082229357147  UNAVAIL      0     0     0  was /dev/da9
      5479144358025821516   UNAVAIL      0     0     0  was /dev/da10
      18069338760597396722  UNAVAIL      0     0     0  was /dev/da11
      8544715284509422949   UNAVAIL      0     0     0  was /dev/da12
      16642679029912355123  UNAVAIL      0     0     0  was /dev/da13
      12021597569291021429  UNAVAIL      0     0     0  was /dev/da14
      6034088543236281626   UNAVAIL      0     0     0  was /dev/da15

I'm not familiar with those large integers; I'm more used to seeing GPT
ID numbers or device names.

Thanks!

-- 
Tim Gustafson
tjg@soe.ucsc.edu
831-459-5354
Baskin Engineering, Room 313A
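P.S. My guess (and it is only a guess) is that those large integers are
the ZFS GUIDs of the vdevs, which "zpool status" seems to print in place
of the old /dev path once the underlying device node has gone missing.
Assuming that's right, the GUID-to-old-device mapping can be pulled out
of a saved copy of the output above with a bit of awk:

```shell
# Sketch, not from the original thread: extract "GUID old-device" pairs
# from saved "zpool status" output. Sample abridged from the output above.
sample='993249040670530816 UNAVAIL 0 0 0 was /dev/da0
101352666830918296 UNAVAIL 0 0 0 was /dev/da1'

# $1 is the vdev GUID, $NF is the last field (the old device path).
printf '%s\n' "$sample" | awk '{print $1, $NF}'
# prints:
# 993249040670530816 /dev/da0
# 101352666830918296 /dev/da1
```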