From owner-svn-doc-all@freebsd.org  Sat Jul 15 00:47:55 2017
From: Benjamin Kaduk <bjk@FreeBSD.org>
Date: Sat, 15 Jul 2017 00:47:54 +0000 (UTC)
To: doc-committers@freebsd.org, svn-doc-all@freebsd.org,
        svn-doc-head@freebsd.org
Subject: svn commit: r50497 - head/en_US.ISO8859-1/htdocs/news/status
Message-Id: <201707150047.v6F0lsNq070064@repo.freebsd.org>

Author: bjk
Date: Sat Jul 15 00:47:54 2017
New Revision: 50497
URL: https://svnweb.freebsd.org/changeset/doc/50497

Log:
  Add 2017Q2 Ceph entry from Willem Jan Withagen

Modified:
  head/en_US.ISO8859-1/htdocs/news/status/report-2017-04-2017-06.xml

Modified: head/en_US.ISO8859-1/htdocs/news/status/report-2017-04-2017-06.xml
==============================================================================
--- head/en_US.ISO8859-1/htdocs/news/status/report-2017-04-2017-06.xml	Sat Jul 15 00:22:08 2017	(r50496)
+++ head/en_US.ISO8859-1/htdocs/news/status/report-2017-04-2017-06.xml	Sat Jul 15 00:47:54 2017	(r50497)
@@ -1021,4 +1021,145 @@
   to upstream when they break TensorFlow on &os;.

Ceph on &os;

Contact: Willem Jan Withagen <wjw@digiware.nl>
Links: Ceph Main Site, Main Repository, My &os; Fork

Ceph is a distributed object store and file system designed to
provide excellent performance, reliability, and scalability.

  • Object Storage

    Ceph provides seamless access to objects using native
    language bindings or radosgw, a REST interface that is
    compatible with applications written for S3 and Swift (a
    brief example follows this list).

  • Block Storage

    Ceph's RADOS Block Device (RBD) provides access to block
    device images that are striped and replicated across the
    entire storage cluster.

  • File System

    Ceph provides a POSIX-compliant network file system that
    aims for high performance, large data storage, and maximum
    compatibility with legacy applications.
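
As a quick illustration of the object-store layer, here is a
minimal sketch using the stock rados CLI; the pool name, object
name, and file paths are made up, and a running cluster with a
usable client keyring is assumed:

    # create a small pool, then store, list, and fetch one object
    ceph osd pool create mypool 64
    rados -p mypool put greeting ./hello.txt
    rados -p mypool ls
    rados -p mypool get greeting /tmp/greeting.out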

I started looking into Ceph because the HAST solution with CARP
and ggate did not really do what I was looking for.  My aim is to
run a Ceph storage cluster whose storage nodes run ZFS, with user
stations running bhyve on RBD disks stored in Ceph.

Compiling for &os; will now build most of the tools available in
Ceph.

The most important changes since the last report are:

  • Ceph has released the release candidate of v12.1.0 (aka
    Luminous); the corresponding packaging is sitting in my tree,
    waiting for Luminous to be actually released.

  • ceph-fuse works and allows mounting of cephfs filesystems.
    The speed is not impressive, but it does work.

  • rbd-ggate is available to create a Ceph rbd-backed device.
    rbd-ggate was submitted by Mykola Golub.  It works in a
    rather simple fashion: once a cluster is functioning, rbd
    import and rbd-ggate map create ggate-like devices backed by
    the Ceph cluster; see the sketch after this list.
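
A minimal sketch of both tools in action; the monitor address,
pool, and image names are hypothetical, and a running cluster
with a default /etc/ceph/ceph.conf is assumed:

    # mount CephFS through FUSE at an arbitrary mount point
    mkdir -p /mnt/cephfs
    ceph-fuse -m mon1.example.org:6789 /mnt/cephfs

    # import a raw disk image into RBD, then expose it as a
    # ggate-like device
    rbd import vmdisk.img rbd/vmdisk
    rbd-ggate map rbd/vmdisk    # prints the device, e.g. /dev/ggate0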

Other improvements since the previous report:

  • Some bugs in the init-ceph code (needed for rc.d) are being
    fixed.

  • RBD and rados are functioning.

  • The needed compatibility code was written so that &os; and
    Linux daemons can operate together in a single cluster.

  • More of the awkward dependencies on Linux-isms have been
    removed; only /bin/bash is there to stay.

Looking forward, the next official release of Ceph is called
Luminous (v12.1.0).  As soon as it is available from upstream, a
port will be provided for &os;.

To get things running on a &os; system, run pkg install
net/ceph-devel, or clone https://github.com/wjwithagen/ceph,
check out the wip.freebsd.201707 branch, and build manually by
running ./do_freebsd.sh in the checkout root.  Both routes are
spelled out below.
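
A short sketch of both routes, taken from the description above:

    # option 1: install the development snapshot from packages
    pkg install net/ceph-devel

    # option 2: build from the work-in-progress branch
    git clone https://github.com/wjwithagen/ceph
    cd ceph
    git checkout wip.freebsd.201707
    ./do_freebsd.sh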

Parts not (yet) included:

  • KRBD, though rbd-ggate is usable in its stead.

  • BlueStore.  &os; and Linux have different AIO APIs, and that
    incompatibility needs to be resolved somehow.  Additionally,
    there is discussion in &os; about aio_cancel not working for
    all device types.

Open tasks:

  • Run integration tests to see if the &os; daemons will work
    with a Linux Ceph platform.

  • Investigate the keystore, which can be embedded in the kernel
    on Linux and currently prevents building Cephfs and some
    other parts.  The first question is whether it is really
    required, or whether only KRBD requires it.

  • Scheduler information is not used at the moment, because the
    schedulers work rather differently between Linux and &os;.
    But at a certain point in time, this will need some attention
    (in src/common/Thread.cc).

  • Improve the &os; init scripts in the Ceph stack, both for
    testing purposes and for running Ceph on production machines.
    Work on ceph-disk and ceph-deploy to make them more &os;- and
    ZFS-compatible.

  • Build a test cluster and start running some of the teuthology
    integration tests on it.  Teuthology wants to build its own
    libvirt, and that does not quite work with all the packages
    &os; already has in place.  There are many details to work
    out here.

  • Design a virtual disk implementation that can be used with
    bhyve and attached to an RBD image; an interim sketch using
    rbd-ggate follows this list.
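
Until such a native backend exists, one possible interim approach
(a sketch, not part of the report's plan) is to map the RBD image
with rbd-ggate as shown earlier and hand the resulting device to
bhyve as a plain block device; the image name and guest
configuration here are hypothetical:

    # expose the hypothetical RBD image as a GEOM gate device
    rbd-ggate map rbd/vmdisk    # e.g. /dev/ggate0

    # attach the device to a guest as a virtio block disk
    # (boot loader setup via bhyveload/grub-bhyve/UEFI omitted)
    bhyve -c 2 -m 2G -H \
        -s 0,hostbridge \
        -s 3,virtio-blk,/dev/ggate0 \
        -s 31,lpc -l com1,stdio \
        guestvm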