Date: Thu, 18 Feb 2010 12:26:33 +0000
From: krad <kraduk@googlemail.com>
To: pluknet <pluknet@gmail.com>
Cc: freebsd-hackers@freebsd.org, Ivan Voras <ivoras@freebsd.org>
Subject: Re: ZFS 'inodes' (as reported by 'df -i') running out?
Message-ID: <d36406631002180426m3966699dvacc0b993e549cb5b@mail.gmail.com>
In-Reply-To: <a31046fc1002180404j28d28291ubae389d2babcdc82@mail.gmail.com>
References: <0AC6C93D50BE19A97047BAE3@HPQuadro64.dmpriest.net.uk> <hlj917$5vv$1@ger.gmane.org> <a31046fc1002180404j28d28291ubae389d2babcdc82@mail.gmail.com>
On 18 February 2010 12:04, pluknet <pluknet@gmail.com> wrote:
> On 18 February 2010 14:41, Ivan Voras <ivoras@freebsd.org> wrote:
> > Karl Pielorz wrote:
> >>
> >> Hi All,
> >>
> >> I originally posted this in freebsd-fs - but didn't get a reply... I
> >> have a number of systems (mostly 7.2-S/amd64) running ZFS. Some of these
> >> handle millions of files.
> >>
> >> I've noticed recently, according to "df -i" I'm starting to run out of
> >> inodes on some of them (96% used).
> >>
> >> e.g.
> >>
> >> "
> >> Filesystem  iused  ifree %iused  Mounted on
> >> vol/imap  1726396  69976    96%  /vol/imap
> >> "
> >>
> >> I know ZFS doesn't have inodes (think they're znodes), and is capable of
> >> handling more files than you can probably sensibly think about on a
> >> filesystem - but is "df -i" just getting confused, or do I need to be
> >> concerned?
> >
> > AFAIK ZFS allocates inodes when needed so df -i reports the previously
> > allocated value. The number of available inodes should automatically
> > grow as you add more files.
>
> Sorta jfyi.
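[Editor's sketch: the 96% figure above is just the ratio of the iused/ifree columns. Recomputing it from the numbers quoted in the thread (column layout assumed from Karl's df output) shows how df derives it; on ZFS both columns come from already-allocated object counts, so the percentage is not a hard ceiling.]

```shell
# Recompute %iused from the iused/ifree columns, using the exact figures
# quoted in the thread as sample input (layout as in Karl's "df -i" output).
awk 'NR > 1 { pct = 100 * $2 / ($2 + $3); printf "%s %.0f%%\n", $1, pct }' <<'EOF'
Filesystem iused ifree %iused Mounted on
vol/imap 1726396 69976 96% /vol/imap
EOF
# prints: vol/imap 96%
```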
> That's what I see on Solaris:
> df: operation not applicable for FSType zfs
>
> --
> wbr,
> pluknet

Just wait until you start using dedup and get magically growing disks with df 8))

$ dd if=/dev/urandom of=/tmp/test bs=128k count=1
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00317671 s, 41.3 MB/s
$ zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
dedup  1016M  95.5K  1016M     0%  1.00x  ONLINE  -
rpool   148G   103G  45.0G    69%  1.00x  ONLINE  -
$ zfs set dedup=on dedup
$ df -h /dedup
Filesystem            Size  Used Avail Use% Mounted on
dedup                 984M   21K  984M   1% /dedup
$ seq 1 1000 | while read a ; do cp /tmp/test /dedup/test.$RANDOM; done
$ df -h /dedup/
Filesystem            Size  Used Avail Use% Mounted on
dedup                 1.1G  116M  984M  11% /dedup
$ zpool list dedup
NAME    SIZE  ALLOC   FREE    CAP  DEDUP    HEALTH  ALTROOT
dedup  1016M   360K  1016M     0%  921.00x  ONLINE  -

It's only available in OpenSolaris dev builds at the moment, so don't get too excited, but in a year or so it may hit FreeBSD. You will need a beefy machine though, with an SSD-backed L2ARC.
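[Editor's sketch: a back-of-envelope check of the 921.00x figure. The inference here (not stated by the poster) is that $RANDOM name collisions left 921 unique copies out of the 1000 attempts, and that all copies share one 128 KiB block in the dedup table, so the ratio is simply the copy count.]

```shell
# N identical copies of a single 128 KiB block: the DDT holds one entry
# referenced N times, so the reported dedup ratio is simply N.
copies=921                        # assumed survivors of $RANDOM collisions
blocksize=$((128 * 1024))         # bs=128k from the dd command above
logical=$((copies * blocksize))   # what df counts against the filesystem
physical=$blocksize               # the single block the pool actually stores
echo "logical: $logical bytes, ratio: $((logical / physical)).00x"
# prints: logical: 120717312 bytes, ratio: 921.00x
```

The logical figure (~115 MiB) also lines up with the 116M that df reports after the copy loop, while zpool list still shows only 360K allocated.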