From owner-freebsd-fs@freebsd.org Thu Dec 27 18:37:30 2018
From: Freddie Cash <fjwcash@gmail.com>
Date: Thu, 27 Dec 2018 10:37:15 -0800
Subject: Re: Suggestion for hardware for ZFS fileserver
To: Willem Jan Withagen
Cc: Sami Halabi, FreeBSD Filesystems <freebsd-fs@freebsd.org>

On Thu, Dec 27, 2018, 2:55 AM Willem Jan Withagen wrote:

> On 22/12/2018 15:49, Sami Halabi wrote:
> > Hi,
> >
> > What SAS HBA card do you recommend for 16/24 internal ports and 2
> > external that are recognized and work well with FreeBSD ZFS?
>
> There is no real advice here, but what I saw is that it is relatively
> easy to overload a lot of the busses involved in this.
>
> I got this when building Ceph clusters on FreeBSD, where each disk has
> its own daemon to hammer away on the platters.
>
> The first bottleneck is the disk "backplane". If you do not wire
> every disk with a dedicated HBA-disk cable, then you are sharing the
> bandwidth on the backplane between all the disks,
> and depending on the architecture of the backplane, several disks share
> one expander, and the feed into that will be shared by the disks
> attached to that expander. Some expanders have multiple inputs from the
> HBA, but I have seen cases where 4 SAS lanes go in and only 2 get used.

You can get backplanes that use multi-lane SFF-8087 connectors and cables
between the HBA and backplane, but provide individual connections to each
drive bay. You get the best of both worlds (individual connections to each
drive, but only 1 cable for every 4 drives). :) No expanders or port
multipliers involved.

The Supermicro 836A backplane is an example of that. It's what we use for
all our ZFS and iSCSI boxes.

AMD Epyc motherboards provide lots of PCIe slots and lanes to stuff with
HBAs, without worrying about bottlenecking. :)

--
Cheers,
Freddie

Typos due to phone keyboard.
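P.S. A rough back-of-envelope sketch of the expander oversubscription
described above. The lane speed and disk count here are illustrative
assumptions (SAS2 6 Gbit/s lanes, 16 disks behind one expander), not
figures from any particular chassis:

```python
# Back-of-envelope: per-disk bandwidth through a shared SAS expander
# versus direct-attach wiring (one dedicated lane per drive bay).
# All figures are illustrative assumptions, not measurements.

SAS2_LANE_GBPS = 6.0      # assumed SAS2 link speed per lane, Gbit/s
LANES_USED = 2            # expander uplink: 4 lanes wired, only 2 active
DISKS_ON_EXPANDER = 16    # assumed number of disks behind the expander

# Shared case: every disk streaming at once splits the uplink.
uplink_gbps = SAS2_LANE_GBPS * LANES_USED
per_disk_shared = uplink_gbps / DISKS_ON_EXPANDER

# Direct-attach case: each drive bay gets its own 6 Gbit/s lane.
per_disk_direct = SAS2_LANE_GBPS

print(f"shared expander: {per_disk_shared:.2f} Gbit/s per disk")
print(f"direct attach:   {per_disk_direct:.2f} Gbit/s per disk")
```

With those assumptions the shared path leaves under 1 Gbit/s per disk
under full load, which is why a direct-attach backplane like the 836A
avoids the bottleneck entirely.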