Date: Sat, 22 Mar 2003 11:35:26 +1030
From: Greg 'groggy' Lehey <grog@FreeBSD.org>
To: Alexander Haderer <alexander.haderer@charite.de>
Cc: Maarten de Vries <mdv@unsavoury.net>, Dirk-Willem van Gulik <dirkx@webweaving.org>, freebsd-questions@FreeBSD.ORG
Subject: Re: Three Terabyte
Message-ID: <20030322010526.GF75577@wantadilla.lemis.com>
In-Reply-To: <5.2.0.9.1.20030321113340.019d12a0@postamt1.charite.de>
References: <5.2.0.9.1.20030320125711.019eb9c8@postamt1.charite.de> <20030320111436.N74106-100000@foem.leiden.webweaving.org> <5.2.0.9.1.20030321113340.019d12a0@postamt1.charite.de>
On Friday, 21 March 2003 at 12:57:27 +0100, Alexander Haderer wrote:
> At 10:26 21.03.2003 +1030, Greg 'groggy' Lehey wrote:
>> On Thursday, 20 March 2003 at 13:13:18 +0100, Alexander Haderer wrote:
>>> At 12:53 20.03.2003 +0100, Maarten de Vries wrote:
>>>> This would be for backup. Data on about 50 webservers would be backed up
>>>> to it on a nightly basis. So performance wouldn't be important.
>>>
>>> Sure? Consider this:
>>>
>>> a.
>>> Filling 3 TB at 1 MByte/s takes more than 800 hours, or 33 days.
>>
>> I do a nightly backup to disk. It's compressed (gzip), which is the
>> bottleneck. I get this sort of performance:
>>
>>   dump -2uf - /home | gzip > /dump/wantadilla/2/home.gz
>>   ...
>>   DUMP: 1254971 tape blocks
>>   DUMP: finished in 217 seconds, throughput 5783 KBytes/sec
>>   DUMP: level 2 dump on Thu Mar 20 21:01:31 2003
>>
>> You don't normally fill up a backup disk all at once, so this would be
>> perfectly adequate. I'd expect a system of the kind Maarten is talking
>> about to be able to transfer at least 40 MB/s sequentially at the disk.
>> That would let him back up over 1 TB in an 8-hour period.
>
> Of course you are right. My note a. was meant as a more general hint to
> think about transfer rates when dealing with large files and file systems.
> Maarten gave no details about how the webservers are connected to the
> backup server. I should have given more details of what I meant: when
> backing up 50 webservers over the network to one backup server, the
> network may become a bottleneck. If you have to use encrypted connections
> (ssh) because the webservers are located elsewhere, you need CPU power
> on the server side for each connection.

Correct.

>>> b.
>>> Using ssh + dump/cpio/tar needs CPU power for encryption, especially
>>> when multiple clients save their data at the same time.
>>
>> You can share the compression across multiple machines.
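The numbers quoted in this exchange are easy to sanity-check. The following awk one-liner (a sketch added for illustration; the figures come from the thread, the script does not) reproduces Alexander's estimate for point a., the throughput in Greg's dump log, and the 8-hour window figure:

```shell
#!/bin/sh
awk 'BEGIN {
    # a. Time to fill 3 TB at 1 MByte/s (decimal units, as in the thread).
    secs = 3 * 1000^4 / 1000^2
    printf "3 TB at 1 MB/s: %.0f hours (%.1f days)\n", secs / 3600, secs / 86400
    # Greg'\''s dump log: 1254971 1-KB blocks in 217 seconds.
    printf "dump throughput: %.0f KBytes/sec\n", 1254971 / 217
    # Data moved at 40 MB/s sequential over an 8-hour backup window.
    printf "8 hours at 40 MB/s: %.2f TB\n", 40 * 1000^2 * 8 * 3600 / 1000^4
}'
```

This prints about 833 hours (34.7 days) for the first case, 5783 KBytes/sec for the dump log, and 1.15 TB for the 8-hour window, matching the figures in the messages above.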
>> That's what was happening in the example above.
>
> It is a good idea to do compression at the client side.
>
> As I understand it, your example /dump/wantadilla/2 is either a local
> directory or one mounted via NFS. The latter requires a local network
> if you don't want to do NFS mounts across the Internet. Is this right?

Yes. This is just a local network. There's no absolute necessity for
NFS, and I certainly wouldn't do it across the Internet.

Greg

--
See complete headers for address and phone numbers
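Greg's point about sharing the compression across multiple machines can be sketched as a per-client pipeline, combining the dump | gzip example quoted above with the ssh transport mentioned in point b. This is an illustration only: the host name "backuphost" and the destination path are hypothetical.

```shell
#!/bin/sh
# Run on each web server: dump and gzip locally, so every client pays
# its own compression cost, then ship the compressed stream over ssh
# to the central backup box. "backuphost" and the path are hypothetical.
HOST=$(hostname)
dump -2uf - /home | gzip | ssh backuphost "cat > /dump/${HOST}/2/home.gz"
```

With 50 clients each compressing their own stream, the backup server only has to handle the ssh decryption and the disk writes, which fits the throughput figures discussed above.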