Date: Mon, 10 Feb 2003 04:00:00 -0800
From: Terry Lambert <tlambert2@mindspring.com>
To: phk@phk.freebsd.dk
Cc: David Schultz <dschultz@uclink.Berkeley.EDU>, arch@FreeBSD.ORG
Subject: Re: Our lemming-syncer caught in the act.
Message-ID: <3E479440.D89E90F5@mindspring.com>
References: <37473.1044868995@critter.freebsd.dk>
phk@phk.freebsd.dk wrote:
> In message <20030210091317.GD5165@HAL9000.homeunix.com>, David Schultz writes:
> >When a large file times out, a significant amount of I/O can be
> >generated.  This is still far better than the old syncer that
> >flushed everything every 30 seconds.  The reasons for this
> >behavior are explained in src/sys/ufs/ffs/README.  After reading
> >that, do you still think it makes sense to try to do better?
>
> Yes, it makes a lot of sense.  There is no point in batching up
> writes to the point of shoving 200 requests off at once, then
> waiting 30 seconds, then doing it again, and so on.
>
> We can and need to do better than that.

Are there any statistics on how many requests are prevented by soft
updates?  Maybe the value you are really after would be best expressed
as a ratio?

It seems to me that such a measurement would be a necessary part of any
change made on the basis of this instrumentation, since decreasing the
interval might increase the absolute number of requests as a result.

-- Terry

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message
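As a minimal sketch of the ratio Terry describes, assume a hypothetical
pair of counters collected over one syncer interval: one for the writes
the syncer would have issued without soft updates, and one for the writes
actually sent to disk.  Neither counter exists in FreeBSD as shipped;
both the struct and its field names are placeholders for whatever
statistics soft updates would need to export.  The prevented-to-issued
ratio could then be computed like this:

    #include <stdio.h>

    /*
     * Hypothetical instrumentation: counts gathered over one syncer
     * interval.  These counters do not exist in FreeBSD as-is; they
     * stand in for whatever soft updates would need to export.
     */
    struct su_stats {
    	unsigned long writes_requested;	/* writes the syncer would have issued */
    	unsigned long writes_issued;	/* writes actually sent to the disk */
    };

    /*
     * Fraction of requests prevented by soft updates: 0.0 means no
     * benefit, values near 1.0 mean almost all writes were coalesced.
     */
    static double
    prevented_ratio(const struct su_stats *st)
    {
    	if (st->writes_requested == 0)
    		return (0.0);
    	return (1.0 -
    	    (double)st->writes_issued / (double)st->writes_requested);
    }

    int
    main(void)
    {
    	/* Example numbers only, to show how the ratio would read. */
    	struct su_stats st = {
    		.writes_requested = 1000,
    		.writes_issued = 200
    	};

    	printf("prevented ratio: %.2f\n", prevented_ratio(&st));
    	return (0);
    }

With a ratio like this in hand, a shorter syncer interval could be judged
by whether it keeps the coalescing benefit, rather than by request counts
alone.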