Date: Thu, 28 Apr 2011 19:51:17 -0700
From: Rumen Telbizov <telbizov@gmail.com>
To: Jeremy Chadwick <freebsd@jdc.parodius.com>
Cc: Denny Schierz <linuxmail@4lin.net>, Alexander Motin <mav@freebsd.org>, FreeBSD Stable <freebsd-stable@freebsd.org>
Subject: Re: MPS driver: force bus rescan after remove SAS cable
Message-ID: <BANLkTin2MQZVevwaTM_H1R1iMwECiNzZ3g@mail.gmail.com>
In-Reply-To: <20110428032347.GA15220@icarus.home.lan>
References: <20110427125736.GA1977@icarus.home.lan> <mailpost.1303911582.5772290.15344.mailing.freebsd.stable@FreeBSD.cs.nctu.edu.tw> <4DB8381B.4030408@FreeBSD.org> <BANLkTim5BJLQ_mRTPFJADFEeSe=2BQpqng@mail.gmail.com> <20110428032347.GA15220@icarus.home.lan>
Jeremy:

> I don't mean to sound critical, but why do you guys do this? The reason
> I ask: on actual production filers (read: NetApps), you don't go yanking
> out the FC cable between the HBA and the NA and expect everything to "be
> happy" afterwards. Most SAN administrators tend to reboot an appliance
> when doing this kind of work -- because this kind of work is considered
> maintenance.

I have just realized that I didn't respond with what I intended to. Sorry
about that. What I meant to add to the discussion yesterday was that
ejecting a single disk and plugging it back in does not cause (at least in
my case) the block device to re-appear. I haven't tried unplugging the
whole cable/backplane; I don't really see the point in that anyway.

> I understand what you folks are reporting is a problem. I'm just
> wondering why you're complaining about having to reboot a machine with
> an HBA in it after doing this kind of *physical* cabling work. My
> immediate thought is "I'm really not surprised". I guess some other
> people *are* surprised. :-)

Again, I missed the point and didn't respond properly.

> > Also identify function doesn't work from the OS (no problem
> > via the card BIOS). Don't remember having any luck with sg3_util
> > package either but worth trying again.
>
> I don't use SAS myself, but wouldn't the command be "inquiry" and not
> "identify"? "identify" is for ATA (specifically SATA via CAM), while
> "inquiry" is for SCSI. Where SAS fits into this is unknown to me.

Well, I have SATA disks visible as /dev/da*. From camcontrol(8):

     inquiry   Send a SCSI inquiry command (0x12) to a device. By default,
               camcontrol will print out the standard inquiry data, device
               serial number, and transfer rate information. The user can
               specify that only certain types of inquiry data be printed:

Example:

# camcontrol inquiry /dev/da47
pass48: <ATA WDC WD2003FYYS-0 0D02> Fixed Direct Access SCSI-5 device
pass48: Serial Number WD-WMAUR0408496
pass48: 300.000MB/s transfers, Command Queueing Enabled

It's a SATA disk in this case, attached to a SAS/SATA backplane and a
SAS2008 HBA chip (9211-8i). What I need is a way to light up the fault LED
on the disk that I want to identify (point to). This is usually what I
need when I send a DC technician to replace a disk. For that I thought I
should be using:

     identify  Send a ATA identify command (0xec) to a device.

In my experience, both SAS and SATA disks always show up as /dev/da*
devices; it's a combo controller and backplane. So which is the correct
way of identifying a disk?
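If I do give sg3_utils another try, the rough idea would be something like
the sketch below. This is only a guess for my setup: the ses device name
and the element index are placeholders, and the --index/--set syntax seems
to vary between sg3_utils versions, so sg_ses(8) should be checked first.

# camcontrol devlist -v                       (find the enclosure's ses/pass device)
# sg_ses --page=2 /dev/ses0                   (enclosure status page: map slots to element indexes)
# sg_ses --index=7 --set=fault /dev/ses0      (request the fault LED on element 7)
# sg_ses --index=7 --clear=fault /dev/ses0    (clear it again)

If the backplane doesn't expose SES at all, then I suppose it's back to
doing it from the card BIOS.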
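Also, for context on the throughput numbers below: by "sequential access"
I just mean dumb sequential reads and watching the rate dd reports at the
end, along the lines of the following (the device name is only a
placeholder for a member disk or the raid0 volume itself):

# dd if=/dev/da0 of=/dev/null bs=1m count=20000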
> > On a related note: recently LSI released version 9.0 of their firmware
> > for SAS2008 and I found it fixes certain performance problems with
> > SuperMicro backplanes!
>
> In another thread, or a PR, if you could provide those technical details
> that would be beneficial. There are a very large number of FreeBSD
> users who use Supermicro server-class hardware, and I'm certain they
> would be interested in a full disclosure.

What I meant was that it fixes problems that are not specific to FreeBSD.
I don't have much more to add, and I don't think a separate thread is
needed here (since it's not directly FreeBSD specific), but in a nutshell:
the issue I was experiencing was that when I connected a 9211-8i to a
6Gbit/s SAS expander, the performance/bandwidth was terrible, and I
couldn't get more than 200 MB/s out of the disk array in sequential
access, even with the disks in a simple raid0 setup.

With the release of version 9.0 everything is pretty good, and I am able
to achieve gigabyte-per-second speeds in sequential access. Another bug
they fixed (not as serious, but still annoying) was that each lane of a
multilane (8087) cable to the backplane was reported as a separate
connection, so all the disks were visible 4 times (via 4 different
expanders) even though there is only 1 multilane cable connected to 1
backplane. Again, both of those are fixed in 9.0.

I hope this helps.

Cheers,

-- 
Rumen Telbizov
http://telbizov.com