Date: Fri, 22 Apr 2016 12:42:43 -0700
Subject: Re: Storage cluster advise, anybody?
From: Mehmet Erol Sanliturk
To: galtsev@kicp.uchicago.edu
Cc: FreeBSD Questions Mailing List <freebsd-questions@freebsd.org>

On Fri, Apr 22, 2016 at 12:17 PM, Valeri Galtsev wrote:

> Dear Experts,
>
> I would like to ask everybody: what would you advise to use as a storage
> cluster, or as a distributed filesystem?
>
> I did my own research into what I could use, but I hit a snag with my
> seemingly best choice, so in the end I decided to stay away from it and
> ask clever people what they would use.
>
> My requirements are:
>
> 1. I would like to have one big (say, comparable to a petabyte) filesystem,
> accessible on more than one machine, composed of the disk space left over
> on a bunch of machines with 1 gigabit per second ethernet connections.
>
> 2. It can be a bit slow, as a filesystem one would use for backups (say,
> using bacula or bareos) and/or for long-term storage of large datasets,
> portions of which can be copied over to faster storage for processing if
> necessary. I am thinking of 1-2 TB of data written to it daily.
>
> 3. It would be great to have it resilient to a single machine failure or
> reboot.
>
> 4. Metadata machines should be redundant (or at least a backup metadata
> host should be manually convertible into the master metadata host if a
> fatal failure of the master, or corruption of its data, happens).
>
> What I would like to avoid/exclude:
>
> 1. Proprietary commercial solutions, as:
>
> a. I would like to stay on as minimal a budget as possible.
> b. I want to be able to predict that it will exist for a long time, and I
> have had better luck with predictions of this sort about open source
> projects than about proprietary ones.
>
> 2. Open source solutions using portions of proprietary closed source
> binaries/libraries (e.g., I would like to stay away from google
> proprietary code/binaries/libraries/modules).
>
> 3. Kernel level modifications. I really would like to have this as
> independent of the OS as I can, or rather available on multiple OSes
> (though I do not like Java based things - just my personal experience with
> some of them). I have a bunch of Linux boxes and a bunch of FreeBSD boxes,
> and I do not want to exclude either of them if possible. Also, the need
> for a custom Linux kernel specifically scares me: Linux kernels get
> critical updates often, and having customizations lag behind a critical
> update is as unpleasant as having to reboot the machine because of a
> kernel update.
>
> I'm not too scared of "split nature" projects: proprietary projects with
> an open source satellite. I have mixed experience with those - with the
> open source satellite, I mean. Some of them are indeed not neglected, and
> even though you may be missing some features the commercial counterpart
> has, some are really great: they are just missing commercial support and
> maybe have somewhat sparse documentation, making you invest more effort
> into getting them to work, which I don't mind: I can earn my sysadmin's
> salary here. I would say I have more often had good experiences with those
> than bad ones (and I have a list of early indications of a potentially bad
> outcome, so I can more or less predict my future with this kind of
> project).
>
> I really didn't mean to write this, but I figure it will probably surface
> once I start getting your advice, so here it is. I did my research with my
> requirements in mind and came up with a solution: moosefs. It is not
> reviewed much - no reviews with criticism at all - and there is not much
> you can (or at least I could) find in the way of howtos on customization,
> performance tuning, etc. It installs without a hitch. It runs well, until
> you start stress-writing a lot to it in parallel; then it started
> performing exponentially badly for me. This is where extensive attempts to
> find performance tuning documentation went nowhere. What made me decide to
> never use it again was the following. I started migrating data back from
> moosefs to a local UFS filesystem (that is, a FreeBSD box) using the rsync
> command.
> What I observed was: source files, after they had been touched by rsync,
> changed their timestamps - as if, instead of the creation timestamp,
> moosefs reports an access timestamp. This renders rsync from moosefs
> useless, as you cannot re-run a failed rsync, and you obliterate some of
> the metadata of the source (the "creation" timestamp). I wrote an e-mail
> to the sourceforge moosefs mailing list, mentioning all this and the fact
> that I am using open source moosefs. The next day they replied asking
> whether I use version 3."this" or version 3."that", as they wanted to know
> in which of them they have a bug - whereas the latest open source version
> they publish anywhere, including sourceforge, is the older version 2.0.88.
> Basically, my decision was made. Sorry for venting here, but I figured it
> would come up at some point once I started getting your advice.
>
> Thanks a lot for all your advice!
>
> Valeri
>
> ++++++++++++++++++++++++++++++++++++++++
> Valeri Galtsev
> Sr System Administrator
> Department of Astronomy and Astrophysics
> Kavli Institute for Cosmological Physics
> University of Chicago
> Phone: 773-702-4247
> ++++++++++++++++++++++++++++++++++++++++

Another alternative may be GlusterFS:

https://www.gluster.org/
https://en.wikipedia.org/wiki/GlusterFS
https://en.wikipedia.org/wiki/List_of_file_systems#Distributed_parallel_fault-tolerant_file_systems
https://en.wikipedia.org/wiki/Comparison_of_distributed_file_systems

Mehmet Erol Sanliturk
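
If GlusterFS is evaluated, a replicated volume spread across the spare disk
space of a few of the 1 GbE machines roughly matches requirements 1 and 3
above. Below is a minimal sketch of the bootstrap, not a tested recipe: it
assumes glusterd is already installed and running on every node, it runs on
node1, and the hostnames, brick paths, and volume name are placeholders.

    #!/usr/bin/env python3
    """Sketch of bootstrapping a small replicated GlusterFS volume.

    Assumptions: glusterd already runs on every node, this script runs on
    node1, and the hostnames/brick paths/volume name are placeholders.
    """
    import subprocess

    PEERS = ["node2.example.org", "node3.example.org"]
    BRICKS = [
        "node1.example.org:/tank/gluster/brick0",
        "node2.example.org:/tank/gluster/brick0",
        "node3.example.org:/tank/gluster/brick0",
    ]
    VOLUME = "backups"

    def gluster(*args):
        """Run one gluster CLI command and abort if it fails."""
        subprocess.run(["gluster", *args], check=True)

    # Join the other nodes into the trusted storage pool.
    for peer in PEERS:
        gluster("peer", "probe", peer)

    # replica 3: every file lives on all three bricks, so a single machine
    # can fail or reboot without taking the data offline. More bricks can be
    # added later (in multiples of the replica count) with "volume add-brick".
    gluster("volume", "create", VOLUME, "replica", "3", *BRICKS)
    gluster("volume", "start", VOLUME)
    gluster("volume", "info", VOLUME)

Clients would then mount the volume through the FUSE client (on Linux,
something like mount -t glusterfs node1.example.org:/backups /mnt/backups).
How well that client holds up under 1-2 TB of writes per day over 1 GbE, and
how complete the FreeBSD side of the setup is, are exactly the kind of things
worth testing before committing to it.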
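
On the timestamp problem: before abandoning a migration it may help to pin
down exactly which stat() field changes on the source when rsync reads it,
since rsync only reads the source side and atime updates on read are normal.
A rough sketch of such a check follows, assuming Python 3 and a stock rsync;
the SRC and DST paths are placeholders.

    #!/usr/bin/env python3
    """Check whether copying files off a mount with rsync disturbs the
    timestamps the *source* filesystem reports.

    Minimal sketch: SRC and DST are placeholders, and only st_mtime and
    st_atime are compared, since stat() exposes no portable creation time.
    """
    import os
    import subprocess

    SRC = "/mfs/data"       # placeholder: moosefs-mounted source tree
    DST = "/backup/data"    # placeholder: local UFS destination

    def snapshot(root):
        """Map every regular file under root to its (st_mtime, st_atime)."""
        stamps = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                stamps[path] = (st.st_mtime, st.st_atime)
        return stamps

    before = snapshot(SRC)

    # -a preserves times and permissions on the destination; the question
    # here is whether merely reading the source changes what it reports.
    subprocess.run(["rsync", "-a", SRC + "/", DST + "/"], check=True)

    after = snapshot(SRC)

    for path, (mtime, atime) in sorted(before.items()):
        new_mtime, new_atime = after.get(path, (None, None))
        if new_mtime != mtime:
            print("mtime changed:", path, mtime, "->", new_mtime)
        elif new_atime != atime:
            print("atime changed (normal for a read):", path)

If only st_atime moves, that is ordinary read behavior; if st_mtime moves, a
re-run of a failed rsync really would re-copy everything, which matches what
Valeri describes.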