From: Mark Fullmer <maf@splintered.net>
Date: Sat, 22 Dec 2007 01:28:31 -0500
To: Kostik Belousov
Cc: freebsd-net@freebsd.org, freebsd-stable@freebsd.org
Subject: Re: Packet loss every 30.999 seconds

On Dec 22, 2007, at 12:36 AM, Kostik Belousov wrote:

> On Fri, Dec 21, 2007 at 10:30:51PM -0500, Mark Fullmer wrote:
>> The uio_yield() idea did not work.  Still have the same 31 second
>> interval packet loss.
> What patch you have used ?

This is hand-applied from the diff you sent December 19, 2007 1:24:48 PM EST:

sr1400-ar0.eng:/usr/src/sys/ufs/ffs# diff -c ffs_vfsops.c ffs_vfsops.c.orig
*** ffs_vfsops.c        Fri Dec 21 21:08:39 2007
--- ffs_vfsops.c.orig   Sat Dec 22 00:51:22 2007
***************
*** 1107,1113 ****
      struct ufsmount *ump = VFSTOUFS(mp);
      struct fs *fs;
      int error, count, wait, lockreq, allerror = 0;
-     int yield_count;
      int suspend;
      int suspended;
      int secondary_writes;
--- 1107,1112 ----
***************
*** 1148,1154 ****
      softdep_get_depcounts(mp, &softdep_deps, &softdep_accdeps);
      MNT_ILOCK(mp);
-     yield_count = 0;
      MNT_VNODE_FOREACH(vp, mp, mvp) {
          /*
           * Depend on the mntvnode_slock to keep things stable enough
--- 1147,1152 ----
***************
*** 1166,1177 ****
              (IN_ACCESS | IN_CHANGE | IN_MODIFIED | IN_UPDATE)) == 0 &&
              vp->v_bufobj.bo_dirty.bv_cnt == 0)) {
              VI_UNLOCK(vp);
-             if (yield_count++ == 100) {
-                 MNT_IUNLOCK(mp);
-                 yield_count = 0;
-                 uio_yield();
-                 goto relock_mp;
-             }
              continue;
          }
          MNT_IUNLOCK(mp);
--- 1164,1169 ----
***************
*** 1186,1192 ****
          if ((error = ffs_syncvnode(vp, waitfor)) != 0)
              allerror = error;
          vput(vp);
- relock_mp:
          MNT_ILOCK(mp);
      }
      MNT_IUNLOCK(mp);
--- 1178,1183 ----

>
> Lets check whether the syncer is the culprit for you.
> Please, change the value of the syncdelay at the sys/kern/vfs_subr.c
> around the line 238 from 30 to some other value, e.g., 45. After that,
> check the interval of the effect you have observed.

Changed it to 13.
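For reference, here are the related knobs, quoted from memory of
sys/kern/vfs_subr.c on RELENG_6/7 -- treat the exact defaults as
approximate, they may differ by branch:

#define SYNCER_MAXDELAY     32
static int syncer_maxdelay = SYNCER_MAXDELAY;  /* maximum delay time */
static int syncdelay = 30;      /* max time to delay syncing data */
static int filedelay = 30;      /* time to delay syncing files */
static int dirdelay = 29;       /* time to delay syncing directories */
static int metadelay = 28;      /* time to delay syncing metadata */

Note the stock syncdelay (30) already sits just under SYNCER_MAXDELAY
(32), the size of the syncer's slot wheel.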
Not sure if SYNCER_MAXDELAY would also need to be increased had
syncdelay been raised instead of lowered.

static int syncdelay = 13;      /* max time to delay syncing data */

Test:

; use vnodes
% find / -type f -print > /dev/null

; verify
% sysctl vfs.numvnodes
vfs.numvnodes: 32128

; run packet loss test
Now seeing periodic loss every 13994633us (13.99 seconds), which tracks
the new syncdelay.

; reduce # of vnodes
% sysctl kern.maxvnodes=1000

The test now runs clean.

>
> It would be interesting to check whether completely disabling the syncer
> eliminates the packet loss, but such system have to be operated with
> extreme caution.
>
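The packet loss test above is essentially a constant-rate UDP stream
with sequence numbers in the payload.  A minimal sketch of the kind of
receiver that would surface the 13994633us period (hypothetical -- the
port number and packet format are made up, this is not the actual test
tool):

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <err.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    struct sockaddr_in sin;
    struct timeval now, d, last_loss = { 0, 0 };
    uint32_t seq, expect = 0;
    int s, first = 1;

    if ((s = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
        err(1, "socket");
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(9999);         /* made-up test port */
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) < 0)
        err(1, "bind");

    for (;;) {
        if (recv(s, &seq, sizeof(seq), 0) != (ssize_t)sizeof(seq))
            continue;
        seq = ntohl(seq);
        if (!first && seq != expect) {
            /* Gap: report the loss and the interval between losses. */
            gettimeofday(&now, NULL);
            if (timerisset(&last_loss)) {
                timersub(&now, &last_loss, &d);
                printf("lost %u, %ldus since last loss\n",
                    (unsigned)(seq - expect),
                    (long)d.tv_sec * 1000000 + d.tv_usec);
            } else
                printf("lost %u\n", (unsigned)(seq - expect));
            last_loss = now;
        }
        expect = seq + 1;
        first = 0;
    }
    /* NOTREACHED */
}

With a sender transmitting at a known constant rate, a syncer-induced
stall shows up as loss events whose spacing matches syncdelay.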