From: Benjamin Kaduk <bjk@FreeBSD.org>
Date: Sun, 23 Apr 2017 03:15:49 +0000 (UTC)
To: doc-committers@freebsd.org, svn-doc-all@freebsd.org, svn-doc-head@freebsd.org
Subject: svn commit: r50196 - head/en_US.ISO8859-1/htdocs/news/status
Message-Id: <201704230315.v3N3Fn8B039846@repo.freebsd.org>

Author: bjk
Date: Sun Apr 23 03:15:49 2017
New Revision: 50196
URL: https://svnweb.freebsd.org/changeset/doc/50196

Log:
  Add 2017Q1 Ceph entry from Willem Jan Withagen

Modified:
  head/en_US.ISO8859-1/htdocs/news/status/report-2017-01-2017-03.xml

==============================================================================
--- head/en_US.ISO8859-1/htdocs/news/status/report-2017-01-2017-03.xml	Sat Apr 22 18:07:25 2017	(r50195)
+++ head/en_US.ISO8859-1/htdocs/news/status/report-2017-01-2017-03.xml	Sun Apr 23 03:15:49 2017	(r50196)

Ceph on &os;

Contact: Willem Jan Withagen <wjw@digiware.nl>

Links: Ceph Main Site, Main Repository, My &os; Fork

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability, and scalability.

  • Object Storage

    Ceph provides seamless access to objects using native
    language bindings or radosgw, a REST interface that is
    compatible with applications written for S3 and Swift
    (example commands for each interface follow this list).

  • Block Storage

    Ceph’s RADOS Block Device (RBD) provides access to block
    device images that are striped and replicated across the
    entire storage cluster.

  • File System

    Ceph provides a POSIX-compliant network file system that
    aims for high performance, large data storage, and maximum
    compatibility with legacy applications.
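
As a rough sketch of those three access paths, the stock Ceph
command-line tools could be used along these lines; the pool, image,
and mount-point names are made up for the example, and a configured,
reachable cluster is assumed:

  # Object storage: store and retrieve an object in a pool
  rados -p data put backup.tar ./backup.tar
  rados -p data get backup.tar /tmp/backup.tar

  # Block storage: create a 4 GB RBD image
  rbd create data/vmdisk0 --size 4096

  # File system: mount CephFS through ceph-fuse
  ceph-fuse /mnt/cephfs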

I started looking into Ceph because the HAST solution with CARP and
ggate did not really do what I was looking for. My aim is to run a
Ceph storage cluster of storage nodes that run ZFS, with user
stations running bhyve from RBD disks stored in Ceph.

The &os; build compiles most of the tools in Ceph.

The most notable progress since the last report:

  • The most important change is that a port has been
    submitted: net/ceph-devel. However, it does not yet contain
    ceph-fuse.

  • Regular updates to the ceph-devel port are expected, with
    the next one coming in April.

  • ceph-fuse works, allowing one to mount a CephFS filesystem
    on a &os; system and perform normal operations.

  • ceph-disk prepare and activate work for FileStore on ZFS,
    allowing for easy creation of OSDs (see the sketch after
    this list).

  • RBD is now buildable and can be used to manage RADOS Block
    Devices.

  • Most of the awkward dependencies on Linux-isms have been
    removed; only /bin/bash is here to stay.
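
A minimal sketch of FileStore OSD creation on ZFS with ceph-disk
follows; the dataset name and paths are hypothetical, and a working
ceph.conf with a reachable monitor is assumed:

  # Back the OSD with a ZFS dataset mounted at the OSD data path
  zfs create -o mountpoint=/var/lib/ceph/osd/osd.0 zroot/ceph-osd0

  # Prepare and activate a FileStore OSD in that directory
  ceph-disk prepare /var/lib/ceph/osd/osd.0
  ceph-disk activate /var/lib/ceph/osd/osd.0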

To get things running on a &os; system, run pkg install
net/ceph-devel, or clone https://github.com/wjwithagen/ceph and build
manually by running ./do_freebsd.sh in the checkout root.
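
For example, the two routes described above look like this (run as
root; a recursive clone may be needed if the checkout requires
submodules):

  # Pre-built package
  pkg install net/ceph-devel

  # Or build from source in the checkout root
  git clone https://github.com/wjwithagen/ceph
  cd ceph
  ./do_freebsd.sh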

Parts not (yet) included:

  • KRBD: Kernel RADOS Block Devices are implemented in the
    Linux kernel, but not yet in the &os; kernel. It is possible
    that ggated could be used as a template, since it does
    similar things, just between two disks. It also has a
    userspace counterpart, which could ease development (see the
    ggate sketch after this list).

  • BlueStore: &os; and Linux have different AIO APIs, and that
    incompatibility needs to be resolved somehow. Additionally,
    there is discussion in &os; about aio_cancel not working for
    all device types.

  • CephFS as a native filesystem: though ceph-fuse works, it
    can be advantageous to have an in-kernel implementation for
    heavy workloads. Cython tries to access an internal field in
    struct dirent, which does not compile.
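
For reference, this is roughly the ggate model mentioned in the KRBD
item: ggated exports a local device over the network, and ggatec
attaches it as a local /dev/ggate* device. The host name, device
path, and exports entry below are purely illustrative:

  # On the exporting host: allow the client to use the device
  echo "client.example.org RW /dev/da1" >> /etc/gg.exports
  ggated

  # On the client: attach the exported device as /dev/ggateN
  ggatec create -o rw server.example.org /dev/da1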

Open tasks:

  • Run integration tests to see if the &os; daemons will work
    with a Linux Ceph platform.

  • Compile and test the userspace RBD (RADOS Block Device).
    This currently works, but testing has been limited.

  • Investigate whether an in-kernel RBD device could be
    developed, akin to ggate.

  • Investigate the keystore, which can be embedded in the
    kernel on Linux and currently prevents building CephFS and
    some other parts. The first question is whether it is really
    required, or whether only KRBD requires it.

  • Scheduler information is not used at the moment, because the
    schedulers work rather differently between Linux and &os;.
    But at a certain point in time, this will need some attention
    (in src/common/Thread.cc).

  • Improve the &os; init scripts in the Ceph stack, both for
    testing purposes and for running Ceph on production machines.
    Work on ceph-disk and ceph-deploy to make them more &os;-
    and ZFS-compatible.

  • Build a test cluster and start running some of the teuthology
    integration tests on it. Teuthology wants to build its own
    libvirt, and that does not quite work with all the packages
    &os; already has in place. There are many details to work out
    here.

  • Design a virtual disk implementation that can be used with
    bhyve and attached to an RBD image.