From owner-freebsd-fs@FreeBSD.ORG Thu Jul 4 07:31:42 2013
Message-ID: <51D524C9.1030700@freebsd.org>
Date: Thu, 04 Jul 2013 15:31:21 +0800
From: Julian Elischer <julian@freebsd.org>
To: Berend de Boer
Subject: Re: EBS snapshot backups from a FreeBSD zfs file system: zpool freeze?
In-Reply-To: <87d2qz42q4.wl%berend@pobox.com>
Cc: freebsd-fs

On 7/4/13 7:56 AM, Berend de Boer wrote:
>>>>>> "Jeremy" == Jeremy Chadwick writes:
>
> Jeremy> As politely as I can: It sounds like you may have spent
> Jeremy> too much time with these types of setups, or believe them
> Jeremy> to be "magical" in some way, in turn forgetting the
> Jeremy> realities of bare metal and instead thinking "everything
> Jeremy> is software". Bzzt.
>
> Heh. The situation with Amazon is even worse: if things go wrong,
> you're screwed. You can't get your disks back. You can't call
> anyone. There's no bare metal to touch, and no, they won't let you
> into their data centres.
>
> So I'm actually trying to avoid the magic.
>
> The only guarantee I basically have is that if I have made an EBS
> snapshot of my disk, I can, one day, restore it, and that this
> snapshot is stored in some multi-redundancy (magic!) cloud.
>
> (And obviously you can try to run a mirror in another data centre
> using zfs send/recv; yes, I will run that too.)
>
> If you go with AWS, there are no phone calls to make. Disk gone is
> disk gone. So you need to have working backup strategies in place.
put your data in multiple data centers using a Panzura box

> --
> All the best,
>
> Berend de Boer
>
> ------------------------------------------------------
> Awesome Drupal hosting: https://www.xplainhosting.com/
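[Editor's note: the zfs send/recv mirroring Berend mentions can be sketched as below. This is a minimal illustration, not anyone's actual setup; the pool name `tank`, the dataset, the remote host, and the mirror dataset name are all assumptions. The script defaults to a dry run that only prints the commands.]

```shell
#!/bin/sh
# Sketch: take an atomic ZFS snapshot and replicate it to a second
# data center with zfs send / zfs recv over ssh.
# All names below (tank, tank/data, backup@dr-site.example.com) are
# hypothetical placeholders.

POOL=tank
DATASET=$POOL/data
REMOTE=backup@dr-site.example.com
SNAP=$DATASET@$(date +%Y%m%d-%H%M%S)

# Dry-run by default: print the commands instead of executing them.
# Set DRY_RUN=0 to really run against a live pool.
: "${DRY_RUN:=1}"
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "$@"
    else
        "$@"
    fi
}

# 1. ZFS snapshots are atomic, so this gives a consistent
#    point-in-time image of the dataset.
run zfs snapshot "$SNAP"

# 2. Stream the snapshot to the remote pool. 'zfs recv -F' rolls the
#    target back to the most recent common snapshot so the stream
#    applies cleanly; subsequent runs would use 'zfs send -i' for
#    incremental streams.
run sh -c "zfs send $SNAP | ssh $REMOTE zfs recv -F $POOL/data-mirror"
```

For the EBS-consistency question in this thread, the same idea applies: snapshot at the ZFS layer first, so the EBS snapshot taken afterwards contains at least one known-consistent on-disk state.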