From owner-freebsd-hackers Wed Feb  1 10:22:34 1995
Return-Path: hackers-owner
Received: (from root@localhost) by freefall.cdrom.com (8.6.9/8.6.6) id KAA00441 for hackers-outgoing; Wed, 1 Feb 1995 10:22:34 -0800
Received: from godzilla.zeta.org.au (godzilla.zeta.org.au [203.2.228.34]) by freefall.cdrom.com (8.6.9/8.6.6) with ESMTP id KAA00409; Wed, 1 Feb 1995 10:22:15 -0800
Received: (from bde@localhost) by godzilla.zeta.org.au (8.6.9/8.6.9) id FAA18514; Thu, 2 Feb 1995 05:06:10 +1100
Date: Thu, 2 Feb 1995 05:06:10 +1100
From: Bruce Evans
Message-Id: <199502011806.FAA18514@godzilla.zeta.org.au>
To: roberto@blaise.ibp.fr, terry@cs.weber.edu
Subject: Re: Optimizing CVS?
Cc: hackers@freefall.cdrom.com, jkh@freefall.cdrom.com
Sender: hackers-owner@FreeBSD.org
Precedence: bulk

>> When I used CVS under 1.1.5.1 it was very acceptable. Now, when I
>> do a cvs update on directories like lib/libc, it is slow.

The profile for `cvs -Q update src/lib' on an up-to-date src/lib is more
interesting than the one for `cvs -Q bin'.  Now the stat()s dominate and
take too long (3921 usec/call, without even counting idle time, compared
with only 500 usec/call for stat()ing 1000 files in one directory).

>When talking about ext2fs, it should be noted that it does a whole lot
>of caching of meta-data which is not technically legal, but which it gets
>away with because the hardware is much more reliable than when UFS was
>first put together.  Most of the synchronous write-through is disabled.
>This isn't a fair comparison because the reliability is not comparable.

The average reliability of ext2fs is probably much higher.  ufs does many
more disk writes to update metadata (I'd guess 20 times as many), so the
chance of a failure occurring while something is inconsistent is higher.
Bruce

-----------------------------------------------
                0.31   98.88   27703/27703      _Xsyscall [2]
[3]     73.6    0.31   98.88   27703            _syscall [3]
                0.03   19.12    4885/4885       _stat [17]
                0.05   17.94    3449/3449       _open [18]
                0.02   14.08    3373/3373       _lstat [21]
                0.03    8.45    2203/2203       _access [27]
                0.01    7.93     523/523        _select [28]
                0.01    6.47     392/5757       _mi_switch [4]
                0.03    5.48    2607/2607       _read [32]
                0.00    4.40     223/223        _unlink [40]
                0.00    3.75     109/109        _rmdir [45]
                0.00    3.61     109/109        _mkdir [47]
                0.02    3.47    1311/1311       _getdirentries [50]
                0.04    1.35    3134/3134       _close [63]
                0.00    1.28     331/331        _chdir [66]
                0.00    0.22      37/37         _execve [115]
                0.02    0.19    2438/2438       _fstat [117]
                0.01    0.20     525/525        _write [119]
                0.00    0.13       7/7          _wait4 [137]
                0.00    0.12      63/63         _mmap [140]
                0.00    0.11       5/5          _vfork [147]
                0.11    0.00   27535/28649      _copyin [144]
                0.00    0.04       2/2          _fork [201]
                0.00    0.03     338/338        _obreak [223]
                0.00    0.03       2/2          _sigsuspend [225]
                0.00    0.01       7/7          _exit [280]
                0.00    0.01      12/12         ___sysctl [284]
                0.01    0.00     877/877        _lseek [304]
                0.01    0.00     876/876        _fcntl [307]
                0.00    0.01      12/12         _munmap [326]
                0.00    0.00     919/1265       _fuword [352]
                0.00    0.00       5/5          _dup2 [366]
                0.00    0.00      29/29         _sigaction [375]
                0.00    0.00     392/5765       _setrunqueue [239]
                0.00    0.00     117/117        _getpid [383]
                0.00    0.00      11/11         _setitimer [402]
                0.00    0.00      16/16         _mprotect [414]
                0.00    0.00       7/7          _gettimeofday [431]
                0.00    0.00       7/7          _postsig [438]
                0.00    0.00       7/7          _sigreturn [453]
                0.00    0.00      15/15         _getuid [480]
                0.00    0.00       1/1          _ioctl [483]
                0.00    0.00      13/13         _geteuid [496]
                0.00    0.00       7/7          _getgid [500]
                0.00    0.00       7/7          _getegid [513]
                0.00    0.00       4/4          _seteuid [517]
                0.00    0.00       4/4          _sigprocmask [520]
                0.00    0.00       5/51004      _splz [299]
                0.00    0.00       7/14         _issignal [499]
                0.00    0.00       2/2          _getpgrp [546]
                ...
-----------------------------------------------
                0.03   19.12    4885/4885       _syscall [3]
[17]    14.2    0.03   19.12    4885            _stat [17]
                0.04   17.66    4885/14719      _namei [8]
                0.01    1.06    4544/24339      _vput [31]
                0.22    0.00    4544/15108      _copyout [79]
                0.03    0.11    4544/10849      _vn_stat [95]
                ...
granularity: each sample hit covers 4 byte(s) for 0.00% of 142.08 seconds

  %   cumulative    self                self    total
 time   seconds    seconds     calls  us/call  us/call  name
 66.7    94.831    94.831       5068    18712    18712  _idle [5]
 11.6   111.283    16.453                               _cputime [20]
  5.1   118.566     7.282                               _mcount (1506)
  4.1   124.333     5.767                               _user [29]
  2.8   128.296     3.963                               _mexitcount [41]
  0.5   129.055     0.758      56757       13      569  _ufs_lookup [11]
  0.5   129.775     0.720      15108       48       48  _copyout [79]
  0.3   130.158     0.383      14744       26     3500  _lookup [9]
  0.3   130.533     0.375      56757        7        9  _cache_lookup [87]
  0.2   130.850     0.317      34278        9        9  _malloc [98]
  0.2   131.162     0.311     135438        2        2  _ufs_unlock [100]
  0.2   131.469     0.308      27703       11     3580  _syscall [3]
  0.2   131.752     0.282      29302       10       17  _brelse [88]
  0.2   132.030     0.279      82837        3        4  _ufs_lock [97]
  0.2   132.281     0.251      34255        7        7  _free [109]
  0.2   132.525     0.244      83037        3        3  _bcmp [110]
  0.2   132.762     0.237      21811       11       11  ___qdivrem [111]
  0.2   132.994     0.232      93983        2        3  _vrele [112]
  0.2   133.221     0.227      28630        8       23  _hardclock [81]
  0.2   133.439     0.217      43731        5        5  _timeout [116]
  ...
  0.0   139.974     0.031       4885        6     3921  _stat [17]