Date:      Sat, 19 Apr 2003 21:34:51 +0200
From:      Marko Zec <zec@tel.fer.hr>
To:        Terry Lambert <tlambert2@mindspring.com>
Cc:        freebsd-stable@FreeBSD.org
Subject:   Re: PATCH: Forcible delaying of UFS (soft)updates
Message-ID:  <200304192134.51484.zec@tel.fer.hr>
In-Reply-To: <3EA19303.1DB825C8@mindspring.com>
References:  <3E976EBD.C3E66EF8@tel.fer.hr> <200304191153.03970.zec@tel.fer.hr> <3EA19303.1DB825C8@mindspring.com>

On Saturday 19 April 2003 20:18, Terry Lambert wrote:
> Say you stop the clock for 30 seconds: syncer_delayno is not
> incremented during those 30 seconds.  Now, during that time,
> vn_syncer_add_to_worklist() is called once a second to add
> workitems.  Say they are the same workitems (delay 0, delay 6).
> Now (relative to the original syncer_delayno), the buckets
> represented by "syncer_workitem_pending[syncer_delayno+delay]"
> fill up as follows:
>
> vn_syncer_add_to_worklist() instance
>
> |             syncer_workitem_pending[original_syncer_delayno + N]
> |  0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3
> |  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6
>
> v
> 0  1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> [your patch:]
> 30 30          30
> [not your patch:]
>    1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 0
>
> Your patch causes us to get two single buckets 30 deep.
>
> Not having your patch gives us 37 buckets; 12 are 1 deep, 18 are 2
> deep.
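
(For concreteness, the pile-up in the diagram above can be reproduced with a
few lines of userland C. This is only a sketch that mimics the kernel's
identifiers; WHEEL_SIZE, depth[] and clock_stopped are stand-ins, not the
actual vfs_subr.c code:)

    #include <stdio.h>

    #define WHEEL_SIZE 32                   /* stock SYNCER_MAXDELAY, a power of two */

    static int syncer_delayno;              /* normally bumped once per second */
    static int syncer_mask = WHEEL_SIZE - 1;
    static int depth[WHEEL_SIZE];           /* stand-in for the pending-work list depths */
    static int clock_stopped = 1;           /* 1 = the stopped-clock case above */

    static void
    add_to_worklist(int delay)
    {
            /* The same slot computation vn_syncer_add_to_worklist() uses. */
            depth[(syncer_delayno + delay) & syncer_mask]++;
    }

    int
    main(void)
    {
            int second, slot;

            for (second = 0; second < 30; second++) {
                    add_to_worklist(0);     /* the two recurring work items */
                    add_to_worklist(6);
                    if (!clock_stopped)     /* the once-a-second wheel advance */
                            syncer_delayno = (syncer_delayno + 1) & syncer_mask;
            }
            for (slot = 0; slot < WHEEL_SIZE; slot++)
                    printf("%2d ", depth[slot]);
            printf("\n");                   /* stopped: two buckets 30 deep; running: spread out */
            return (0);
    }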

The whole purpose of the patch is to delay disk writes when running on battery 
power. In that case it is completely irrelevant whether the buckets are more 
or less evenly distributed over all the delay queues or concentrated in only 
two (or, more precisely, three). Either way, all the queues will be flushed as 
quickly as possible once the disk gets spun up, so that the disk is active for 
the shortest possible time.
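
When the disk does get spun up, the catch-up can be thought of as driving the
stock rushjob mechanism hard. Roughly like this (a sketch only; the name
syncer_delayed is illustrative, not necessarily what the patch really uses,
and rushjob is the stock kernel's "work off this many extra buckets" counter):

    /* Sketch only: a spin-up catch-up expressed in terms of rushjob. */
    static int rushjob;             /* in the real kernel this lives in vfs_subr.c */
    static int syncer_delayed;      /* seconds the wheel was held still (hypothetical) */

    static void
    syncer_catch_up(void)
    {
            /* Ask the syncer to work off every bucket it skipped, back to
             * back, the same way speedup_syncer() asks for one extra bucket. */
            rushjob += syncer_delayed;
            syncer_delayed = 0;
    }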

> Does this make sense now?  It is about insertion in the face of a
> stopped clock, and how bursty the resulting "catchup" will be.

And that is exactly what the user of a battery-powered laptop wants: infrequent 
but bursty writes to disk, and an idle disk at all other times. I have 
described this as the intended behavior since my very first post. It is a 
feature, not a bug. What's wrong with that?

> If you look at the code, you will see that there is no opportunity
> for other code to run in a single bucket list traversal, but in the
> rushjob case of multiple bucket traversals, the system gets control
> back in between buckets, so the operation of the system is much,
> much smoother in the case that individual buckets are not allowed
> to get too deep.  This is normally accomplished by incrementing the
> value of syncer_delayno once per second, as a continuous function,
> rather than a bursty increment once every 30 seconds.

I completely agree with you that smoothness will be sacrificed, but again, 
please keep in mind the original purpose of the patch. When running on 
battery power, smoothness is a bad thing. When running on AC power, the patch 
becomes inactive, so 100% normal operation is automatically restored and you 
get all the smoothness back.
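
For anyone reading along without the source handy, the loop in question is
roughly the following. This is a simplified, from-memory sketch of
sched_sync() in sys/kern/vfs_subr.c, with sync_one_vnode() standing in for
the real vnode lock / VOP_FSYNC(..., MNT_LAZY, ...) / unlock sequence; the
actual code is more involved:

    for (;;) {
            slp = &syncer_workitem_pending[syncer_delayno];
            syncer_delayno += 1;
            if (syncer_delayno == syncer_maxdelay)
                    syncer_delayno = 0;

            /* One bucket is drained without giving up the CPU ... */
            while ((vp = LIST_FIRST(slp)) != NULL)
                    sync_one_vnode(vp);     /* lock, VOP_FSYNC(MNT_LAZY), unlock */

            /* ... but between buckets the syncer either loops again to work
             * off a rushjob catch-up or sleeps until the next second, so the
             * rest of the system gets to run.  Deep buckets therefore mean
             * long uninterrupted bursts. */
            if (rushjob > 0) {
                    rushjob -= 1;
                    continue;
            }
            tsleep(&lbolt, PPAUSE, "syncer", 0);
    }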

> Please read the above, specifically the diagram of bucket list
> depths with a working clock vs. a stopped clock, and the fact
> that the bucket list traversals are atomic, but multiple bucket
> traversals of the same number of equally distributed work items
> are not.

True. But this still doesn't justify your claims from previous posts that a 
patched system is likely to corrupt data or crash. I am still fairly convinced 
it will do neither, both from the limited scope of the modifications the patch 
introduces and from my experience with a production system that has been 
running on a patched kernel the whole time.

Cheers,

Marko
