Date: Thu, 20 Oct 2022 09:00:30 -1000
From: "parv/FreeBSD" <parv.0zero9+freebsd@gmail.com>
To: freebsd-questions@freebsd.org
Subject: Disk enclosure: RAID Machine R4424RM same as 2x SuperChassis 826BE1C-R609JBOD?
Message-ID: <CABObuOpSf%2BE7-xa7NentQpV1xY5%2BHcdV-kRGw5tDD9PGdZ_8PA@mail.gmail.com>
Hi there,

Work currently uses a 24-bay "RAID Machine ... R4424RM" enclosure
( https://www.pc-pitstop.com/24-bay-12g-expander-enclosure )
with disks in ZFS RAID-Z2/Z3 pools, connected via an SFF-8088
or -8644 cable to another computer with a Broadcom/LSI 9300-8e
PCIe 3 card in it.

My thinking is that if some bays in one of the SuperMicro units below
start to malfunction, then the whole of a single 24-bay unit would not
have to be sent for repair or thrown out.

Would the "RAID Machine R4424RM" be equivalent to connecting 2
"SuperChassis 826BE1C-R609JBOD" units
( https://www.supermicro.com/ewin/products/chassis/2u/826/sc826be1c-r609jbod )
together and then using them as one? If that is possible, what kind of
HBAs & cables should I be looking at to connect the 2 enclosures
together, and then to connect them as one unit to a FreeBSD computer?

In the latter case, I do not know how the "mpr" or "mps" driver would
behave with one of the "Qualified SAS Controllers" listed at the
SuperMicro URL (a quick check is sketched after the list) ...

  AOC-SAS3-9380-8E: SAS 12Gb/s PCIe 3.0 8-Port MegaRAID Controller
  AOC-SAS3-9300-8E: SAS 12Gb/s PCIe 3.0 8-Port Host Bus Adapter
  AOC-SAS3-9500-8E: SAS 12Gb/s PCIe 4.0 8-Port Host Bus Adapter
  AOC-SAS3-9500-16E: SAS 12Gb/s PCIe 4.0 16-Port Host Bus Adapter
  AOC-SAS3-9580-8I8E: SAS 12Gb/s PCIe 4.0 8-Port MegaRAID Controller
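For reference, once one of those cards is in the FreeBSD box, I would
check what actually attaches with something like the following (a
minimal sketch using base-system tools; mrsas(4) is only my guess for
the two MegaRAID models):

  # Which kernel driver claimed the HBA: mpr(4) is the SAS3 driver
  # (our existing 9300-8e attaches there), mps(4) is for older SAS2
  # parts.
  pciconf -lv | grep -B3 -i sas

  # Driver attach messages from boot, in case more than one card is
  # present.
  dmesg | egrep -i '^(mpr|mps|mrsas)[0-9]'

  # Disks and SES enclosure devices visible through the HBA.
  camcontrol devlist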
R4424RM" enclosure<br></div><div style=3D"font-= family:monospace" class=3D"gmail_default">( <a href=3D"https://www.pc-pitst= op.com/24-bay-12g-expander-enclosure">https://www.pc-pitstop.com/24-bay-12g= -expander-enclosure</a> )</div><div style=3D"font-family:monospace" class= =3D"gmail_default">with disks in ZFS RAID-Z[23] pools, connected via SFF-80= 88</div><div style=3D"font-family:monospace" class=3D"gmail_default">or -86= 44 cable to another computer with a Broadcom/LSI 9300-8e</div><div style=3D= "font-family:monospace" class=3D"gmail_default">PCIe 3 card in it.</div><di= v style=3D"font-family:monospace" class=3D"gmail_default"><br></div><div st= yle=3D"font-family:monospace" class=3D"gmail_default"> <div style=3D"font-family:monospace" class=3D"gmail_default"><span class=3D= "gmail_default" style=3D"font-family:monospace">My thinking is that if some= bays in one of the below SuperMicro units</span></div><div style=3D"font-f= amily:monospace" class=3D"gmail_default"><span class=3D"gmail_default" styl= e=3D"font-family:monospace">starts to malfunction, then whole of 24-bay sin= gle unit would not have</span></div><div style=3D"font-family:monospace" cl= ass=3D"gmail_default"><span class=3D"gmail_default" style=3D"font-family:mo= nospace">be sent to repair or to be thrown out.<br></span></div><div style= =3D"font-family:monospace" class=3D"gmail_default"><br></div> </div><div style=3D"font-family:monospace" class=3D"gmail_default">Would &q= uot;RAID Machine R4424RM" machine be equivalent to connecting 2</div><= div style=3D"font-family:monospace" class=3D"gmail_default">of "SuperC= hassis 826BE1C-R609JBOD"</div><div style=3D"font-family:monospace" cla= ss=3D"gmail_default">( <a href=3D"https://www.supermicro.com/ewin/products/= chassis/2u/826/sc826be1c-r609jbod">https://www.supermicro.com/ewin/products= /chassis/2u/826/sc826be1c-r609jbod</a> )</div><div style=3D"font-family:mon= ospace" class=3D"gmail_default">together and then using as one unit? 
If tha= t would be possible, what</div><div style=3D"font-family:monospace" class= =3D"gmail_default">kind of HBAs etc & cable should I be looking at to c= onnect the 2</div><div style=3D"font-family:monospace" class=3D"gmail_defau= lt">enclosures together, and=C2=A0 then to connect them as one unit to a Fr= eeBSD</div><div style=3D"font-family:monospace" class=3D"gmail_default">com= puter.<br></div><div style=3D"font-family:monospace" class=3D"gmail_default= "><br></div><div style=3D"font-family:monospace" class=3D"gmail_default">In= the later case I do not how the "mpr" or "mps" drive w= ould behave</div><div style=3D"font-family:monospace" class=3D"gmail_defaul= t">for one of "Qualified SAS Controllers" listed at SuperMicro UR= L ...</div><div style=3D"font-family:monospace" class=3D"gmail_default"><br= ></div><div style=3D"font-family:monospace" class=3D"gmail_default">=C2=A0 = AOC-SAS3-9380-8E: SAS 12Gb/s PCIe 3.0 8-Port MegaRAID Controller<br>=C2=A0 = AOC-SAS3-9300-8E: SAS 12Gb/s PCIe 3.0 8-Port Host Bus Adapter<br>=C2=A0 AOC= -SAS3-9500-8E: SAS 12Gb/s PCIe 4.0 8-Port Host Bus Adapter<br>=C2=A0 AOC-SA= S3-9500-16E: SAS 12Gb/s PCIe 4.0 16-Port Host Bus Adapter<br>=C2=A0 AOC-SAS= 3-9580-8I8E: SAS 12Gb/s PCIe 4.0 8-Port MegaRAID Controller<br></div><div s= tyle=3D"font-family:monospace" class=3D"gmail_default"><br></div><div style= =3D"font-family:monospace" class=3D"gmail_default"><br></div><div style=3D"= font-family:monospace" class=3D"gmail_default">On a related note, does anyo= ne have experience with SuperMicro</div><div style=3D"font-family:monospace= " class=3D"gmail_default">enclosures & warranty repair?</div><div style= =3D"font-family:monospace" class=3D"gmail_default"><br></div><div style=3D"= font-family:monospace" class=3D"gmail_default">In case of RAID Machine encl= osures, we have at least 6-7 I think. In past,</div><div style=3D"font-fami= ly:monospace" class=3D"gmail_default">in one unit when ~5-6 disk bays malfu= nctioned (disk did not show</div><div style=3D"font-family:monospace" class= =3D"gmail_default">up in FreeBSD; disk came up & then dropped again), g= ot it repaired under</div><div style=3D"font-family:monospace" class=3D"gma= il_default">warranty without issue. One unit, now out of warranty, now has = 1 slot</div><div style=3D"font-family:monospace" class=3D"gmail_default">ma= lfunction; will be chucked.<span class=3D"gmail_default" style=3D"font-fami= ly:monospace"></span></div><div style=3D"font-family:monospace" class=3D"gm= ail_default"><span class=3D"gmail_default" style=3D"font-family:monospace">= <br></span></div><div style=3D"font-family:monospace" class=3D"gmail_defaul= t"><span class=3D"gmail_default" style=3D"font-family:monospace"><br></span= ></div><div style=3D"font-family:monospace" class=3D"gmail_default">- parv<= /div><div style=3D"font-family:monospace" class=3D"gmail_default"><br></div= ></div> --0000000000001788b605eb7bf48d--
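P.S. If cascading the two SuperChassis does work as a single unit, I
imagine each of the two expanders would still show up as its own
ses(4) device on the FreeBSD side; something like this should confirm
it (again just a sketch):

  # Should list two enclosures, 12 slots each, with the disks mapped
  # to slots across both.
  sesutil map

  # The SES devices show up alongside the disks.
  camcontrol devlist | grep ses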