Date:      Wed, 24 Mar 2010 08:10:02 +0200
From:      Andriy Gapon <avg@freebsd.org>
To:        Andrew Snow <als@modulus.org>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: on st_blksize value
Message-ID:  <4BA9ACBA.4080608@freebsd.org>
In-Reply-To: <4BA954A6.9030505@modulus.org>
References:  <4BA8CD21.3000803@freebsd.org> <4BA954A6.9030505@modulus.org>

on 24/03/2010 01:54 Andrew Snow said the following:
> Andriy Gapon wrote:
> 
>> One practical benefit can be with ZFS: if a filesystem has
>> recordsize > PAGE_SIZE (e.g. default 128K) and it has checksums or
>> compression enabled, then (over-)writing in blocks smaller than
>> recordsize would require reading of a whole record first.
> 
> Not strictly true: in ZFS the recordsize setting is the maximum size
> of a record; ZFS can still write smaller than this.  If you overwrite
> 1K in the middle of a 128K record then it should just be writing a 1K
> block.  Each block has its own checksum attached to it, so there's no
> need to recalculate checksums for data that isn't changing.

I must admit that I know almost nothing about ZFS internals, but I see a
logical problem in your explanation: if the original data was written as a
single 128K block, and changing a 1K range within it results in a new 1K
block, then the original block is still affected, because its metadata has
to record that the range is now stored in a different block.
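
One way to check this empirically (a sketch only, I have not run it; the
file name and offsets are made up) would be to overwrite 1K in the middle
of a large file on a 128K-recordsize dataset and watch "zpool iostat 1"
while the write is synced - a burst of reads there would suggest a
read-modify-write of the whole record (assuming the record is not already
cached in ARC):

#include <err.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/*
 * Overwrite 1K in the middle of an existing large file and force it out.
 * "bigfile" is assumed to be larger than 128K and to live on a dataset
 * with recordsize=128K.
 */
int
main(void)
{
	char buf[1024];
	int fd;

	memset(buf, 0xAA, sizeof(buf));
	fd = open("bigfile", O_WRONLY);
	if (fd < 0)
		err(1, "open");
	if (pwrite(fd, buf, sizeof(buf), 64 * 1024) != (ssize_t)sizeof(buf))
		err(1, "pwrite");
	if (fsync(fd) != 0)
		err(1, "fsync");
	close(fd);
	return (0);
}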

Perhaps I am just misunderstanding what you said.

But perhaps you were referring to the case of (over)writing a small _file_,
as opposed to overwriting a small range within a large file?
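
This, by the way, is where st_blksize comes in: an application that wants
to avoid such partial-record writes can at least issue its writes in
st_blksize-sized chunks.  A minimal sketch (not from this thread; the
function name and the 64K fallback are arbitrary choices of mine):

#include <sys/stat.h>

#include <unistd.h>

/*
 * Write a buffer to fd in chunks of the filesystem's preferred I/O size
 * (st_blksize), falling back to 64K if fstat() fails or reports nothing
 * useful.
 */
static ssize_t
write_blksize(int fd, const char *buf, size_t len)
{
	struct stat sb;
	size_t chunk, off = 0;

	chunk = (fstat(fd, &sb) == 0 && sb.st_blksize > 0) ?
	    (size_t)sb.st_blksize : 64 * 1024;

	while (off < len) {
		size_t n = len - off < chunk ? len - off : chunk;
		ssize_t w = write(fd, buf + off, n);

		if (w < 0)
			return (-1);
		off += (size_t)w;
	}
	return ((ssize_t)off);
}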

-- 
Andriy Gapon


