Date: Fri, 30 Mar 2007 08:10:40 +1000 (EST)
From: Bruce Evans <bde@zeta.org.au>
To: Ivan Voras <ivoras@fer.hr>
Cc: freebsd-fs@FreeBSD.org
Subject: Re: gvirstor & UFS
Message-ID: <20070330072950.S15433@delplex.bde.org>
In-Reply-To: <euh8vk$vec$1@sea.gmane.org>
References: <euca4b$6l8$1@sea.gmane.org> <20070328100536.S6916@besplex.bde.org> <euh5hh$iis$1@sea.gmane.org> <euh8vk$vec$1@sea.gmane.org>
On Thu, 29 Mar 2007, Ivan Voras wrote:

> Ivan Voras wrote:
>
>> The file system on the virstor device was created with softupdates
>> enabled, as shown...
>
> Without softupdates, the I/O requests fail, with the usual spewing of
> kernel messages from g_vfs_done() in the log, but the application (dd)
> doesn't receive failure codes.  In effect, it looks like the requests
> are ignored - they fail, but dd continues pumping more requests.

This might be because the writes are only of data, and data writes are
async, with little error checking, in ffs without soft updates.  No error
can be reported on return from bdwrite() or bawrite(), since the write
probably hasn't happened yet, and there is no mechanism other than kernel
messages for reporting errors after the fact.

I thought that geom was too silent about i/o errors.  Actually, it seems
to be more verbose than my dscheck() about low-level errors (g_vfs_done()
always prints something if there is an error) and less verbose about
errors that it checks for itself (g_io_check() never prints anything, and
buffers with errors detected by g_io_check() apparently don't get as far
as g_vfs_done()).  The messages printed by my dscheck() are too verbose
but are sometimes useful (especially the details about what caused the
error, which g_vfs_done() cannot print because it sits at too high a
level).

> Something else occurred to me: what if a UFS metadata update (for
> example, in a cg) fails in this way - that it requires an additional
> chunk of physical data that's not available - is there a chance that
> the fs will be corrupted?

ffs is supposed to detect and handle i/o errors for metadata.  It is
sloppy for indirect blocks (it uses (void)bwrite() a lot, discarding the
error return), but hopefully when the write of an indirect block fails
the damage is limited to one file.  I/o errors in old blocks can easily
cause corruption, but for ENOSPC-type errors in new blocks the error
handling hopefully aborts writing before any damage is done.
This depends on ffs using a safe order of writing and on the physical
order not differing from it, which I doubt actually happens, especially
for virtual disks -- everything would have to be synchronous, or
otherwise slow, to preserve the order through all the lower layers.

Bruce