From owner-freebsd-fs@FreeBSD.ORG Mon Feb 3 11:06:45 2014
Date: Mon, 3 Feb 2014 11:06:45 GMT
From: FreeBSD bugmaster
To: freebsd-fs@FreeBSD.org
Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users.
These represent problem reports covering all versions, including
experimental development code and obsolete releases.

S Tracker     Resp. Description
--------------------------------------------------------------------------------
o kern/185858 fs [zfs] zvol clone can't see new device
o kern/184478 fs [smbfs] mount_smbfs cannot read/write files
o kern/182536 fs [zfs] zfs deadlock
o kern/181966 fs [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUA
o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo
o kern/181565 fs [swap] Problem with vnode-backed swap space.
o kern/181377 fs [zfs] zfs recv causes an inconsistant pool
o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt'
o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F
o kern/180979 fs [netsmb][patch]: Fix large files handling
o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc
o kern/180678 fs [NFS] succesfully exported filesystems being reported
o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr
p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf
o kern/178854 fs [ufs] FreeBSD kernel crash in UFS
s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS
o kern/178412 fs [smbfs] Coredump when smbfs mounted
o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize
o kern/178387 fs [zfs] [patch] sparse files performance improvements
o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see
o kern/178329 fs [zfs] extended attributes leak
o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink.
f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr
o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat
o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3,
o kern/177966 fs [zfs] resilver completes but subsequent scrub reports
o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf
o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk
o kern/177445 fs [hast] HAST panic
o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d
o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i
o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic
o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong
o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still
o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime
o kern/175897 fs [zfs] operations on readonly zpool hang
o kern/175449 fs [unionfs] unionfs and devfs misbehaviour
o kern/175179 fs [zfs] ZFS may attach wrong device on move
o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov
o kern/174372 fs [zfs] Pagefault appears to be related to ZFS
o kern/174315 fs [zfs] chflags uchg not supported
o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi
o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption
o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio
o kern/173718 fs [zfs] phantom directory in zraid2 pool
f kern/173657 fs [nfs] strange UID map with nfsuserd
o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo
o kern/173136 fs [unionfs] mounting above the NFS read-only share panic
o kern/172942 fs [smbfs] Unmounting a smb mount when the server became
o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly
o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus
o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz
o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental
o kern/170945 fs [gpt] disk layout not portable between direct connect
o bin/170778 fs [zfs] [panic] FreeBSD panics randomly
o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA
o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted
o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte
o kern/169480 fs [zfs] ZFS stalls on heavy I/O
o kern/169398 fs [zfs] Can't remove file with permanent error
o kern/169339 fs panic while " : > /etc/123"
o kern/169319 fs [zfs] zfs resilver can't complete
o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when
o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU
o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs
o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste
o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U
o kern/167688 fs [fusefs] Incorrect signal handling with direct_io
o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot
o kern/167612 fs [portalfs] The portal file system gets stuck inside po
o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron
o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe
o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene
o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor
o kern/167067 fs [zfs] [panic] ZFS panics the server
o kern/167065 fs [zfs] boot fails when a spare is the boot disk
o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF
o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo
o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di
o kern/166477 fs [nfs] NFS data corruption.
o kern/165950 fs [ffs] SU+J and fsck problem
o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31
o kern/165392 fs Multiple mkdir/rmdir fails with errno 31
o kern/165087 fs [unionfs] lock violation in unionfs
o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency
o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc
o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS
o kern/164256 fs [zfs] device entry for volume is not created after zfs
o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode
o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap'
o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to
o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to
o kern/162944 fs [coda] Coda file system module looks broken in 9.0
o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph
o kern/162751 fs [zfs] [panic] kernel panics during file operations
o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe
o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi
o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo
o kern/161864 fs [ufs] removing journaling from UFS partition fails on
o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is
o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin
o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_
o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou
o kern/161280 fs [zfs] Stack overflow in gptzfsboot
o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd
o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty
o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3
o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic
f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J
o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o
o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE
o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo
o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists
o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r
o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil
o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha
o kern/159930 fs [ufs] [panic] kernel core
o kern/159402 fs [zfs][loader] symlinks cause I/O errors
o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by-
o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s
o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs()
o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option
o kern/159077 fs [zfs] Can't cd .. with latest zfs version
o kern/159048 fs [smbfs] smb mount corrupts large files
o kern/159045 fs [zfs] [hang] ZFS scrub freezes system
o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk
o kern/158802 fs amd(8) ICMP storm and unkillable process.
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o
f kern/157929 fs [nfs] NFS slow read
o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip
o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov
o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and
o kern/156781 fs [zfs] zfs is losing the snapshot directory,
p kern/156545 fs [ufs] mv could break UFS on SMP systems
o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes
o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re
o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current
o kern/155587 fs [zfs] [panic] kernel panic with zfs
p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No
o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors
o bin/155104 fs [zfs][patch] use /dev prefix by default when importing
o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN
o kern/154828 fs [msdosfs] Unable to create directories on external USB
o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1
p kern/154228 fs [md] md getting stuck in wdrain state
o kern/153996 fs [zfs] zfs root mount error while kernel is not located
o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u
o kern/153716 fs [zfs] zpool scrub time remaining is incorrect
o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector
o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions
o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol
o kern/153351 fs [zfs] locking directories/files in ZFS
o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation'
s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w
o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support
o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small
o kern/152022 fs [nfs] nfs service hangs with linux client [regression]
o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory
o kern/151905 fs [zfs] page fault under load in /sbin/zfs
o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648 fs [zfs] disk wait bug
o kern/151629 fs [fs] [patch] Skip empty directory entries during name
o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a
o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251 fs [ufs] Can not create files on filesystem with heavy us
o kern/151226 fs [zfs] can't delete zfs snapshot
o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n
o kern/149208 fs mksnap_ffs(8) hang/deadlock
o kern/149173 fs [patch] [zfs] make OpenSolaris installa
o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE
o kern/148138 fs [zfs] zfs raidz pool commands freeze
o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different "
o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786 fs [zfs] zpool import hangs with checksum errors
o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl
o kern/146528 fs [zfs] Severe memory leak in ZFS on i386
o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server
o kern/145750 fs [unionfs] [hang] unionfs locks the machine
s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an
f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev
o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank
o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189 fs [nfs] nfsd performs abysmally under load
o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416 fs [panic] Kernel panic on online filesystem optimization
s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code
o kern/143825 fs [nfs] [panic] Kernel panic on NFS client
o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212 fs [nfs] NFSv4 client strange work ...
o kern/143184 fs [zfs] [lor] zfs/bufwait LOR
o kern/142878 fs [zfs] [vfs] lock order reversal
o kern/142489 fs [zfs] [lor] allproc/zfs LOR
o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068 fs [ufs] BSD labels are got deleted spontaneously
o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal
o kern/141897 fs [msdosfs] [panic] Kernel panic. msdofs: file name leng
o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro
o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled
o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS
o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri
o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS-
o kern/140640 fs [zfs] snapshot crash
o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs
p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n
o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot
o kern/138662 fs [panic] ffs_blkfree: freeing free block
o kern/138421 fs [ufs] [patch] remove UFS label limitations
o kern/138202 fs mount_msdosfs(1) see only 2Gb
o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs
o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873 fs [ntfs] Missing directories/files on NTFS volume
p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS
o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot
o kern/134491 fs [zfs] Hot spares are rather cold...
o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis
p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter
o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132397 fs reboot causes filesystem corruption (failure to sync b
o kern/132331 fs [ufs] [lor] LOR ufs and syncer
o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145 fs [panic] File System Hard Crashes
o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo
o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin
o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130210 fs [nullfs] Error by check nullfs
o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8)
o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs
o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero
o kern/127029 fs [panic] mount(8): trying to mount a write protected zi
o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr
o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe
o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free
s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS
o kern/123939 fs [msdosfs] corrupts new files
o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a
o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o kern/121385 fs [unionfs] unionfs cross mount -> kernel panic
o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha
o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes
o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F
o kern/118912 fs [2tb] disk sizing/geometry problem with large array
o kern/118713 fs [minidump] [patch] Display media size required for a k
o kern/118318 fs [nfs] NFS server hangs under special circumstances
o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime
o kern/118126 fs [nfs] [patch] Poor NFS server write performance
o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N
o kern/117954 fs [ufs] dirhash on very large directories blocks the mac
o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount
o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on
o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with
o kern/116583 fs [ffs] [hang] System freezes for short time when using
o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un
o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468 fs [patch] [request] add -d option to umount(8) to detach
o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral
o bin/113838 fs [patch] [request] mount(8): add support for relative p
o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show
o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843 fs [msdosfs] Long Names of files are incorrectly created
o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems
s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem
o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro
o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist
o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear
o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes
s bin/97498 fs [request] newfs(8) has no option to clear the first 12
o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849 fs [ufs] rename on UFS filesystem is not atomic
o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean'
o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733 fs [smbfs] smbfs may cause double unlock
o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna
o kern/91134 fs [smbfs] [patch] Preserve access and modification time
a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet
o kern/88657 fs [smbfs] windows client hang when browsing a samba shar
o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64
o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl
o kern/87859 fs [smbfs] System reboot while umount smbfs.
o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files
o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc.
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi
o bin/74779 fs Background-fsck checks one filesystem twice and omits
o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem
o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun
o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po
o kern/67326 fs [msdosfs] crash after attempt to mount write protected
o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange
o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503 fs [smbfs] mount_smbfs does not work as non-root
o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc
o kern/36566 fs [smbfs] System reboot with dead smb mount and umount
o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc
o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t
o kern/9619 fs [nfs] Restarting mountd kills existing mounts

335 problems total.

From owner-freebsd-fs@FreeBSD.ORG Mon Feb 3 16:21:24 2014
Date: Mon, 03 Feb 2014 17:21:00 +0100
From: Maurizio Vairani
To: freebsd-fs@freebsd.org
Subject: SDHC Sony SF-8UX doesn't work.

I'm trying to use this SDHC card as a ZFS cache on a Samsung laptop:

da0 at umass-sim0 bus 0 scbus3 target 0 lun 0
da0: Removable Direct Access SCSI-0 device
da0: Serial Number 058F63666438
da0: 40.000MB/s transfers
da0: 7667MB (15702016 512 byte sectors: 255H 63S/T 977C)
da0: quirks=0x2

After a few minutes of use, this error is displayed:

(da0:umass-sim0:0:0:0): WRITE(10).
CDB: 2a 00 00 0a 97 9f 00 00 05 00
(da0:umass-sim0:0:0:0): CAM status: SCSI Status Error
(da0:umass-sim0:0:0:0): SCSI status: Check Condition
(da0:umass-sim0:0:0:0): SCSI sense: UNIT ATTENTION asc:28,0 (Not ready to ready change, medium may have changed)
(da0:umass-sim0:0:0:0): Command Specific Info: 0xaa5501
(da0:umass-sim0:0:0:0): Retrying command (per sense data)
(da0:umass-sim0:0:0:0): WRITE(10). CDB: 2a 00 00 0a 97 9f 00 00 05 00
(da0:umass-sim0:0:0:0): CAM status: SCSI Status Error
(da0:umass-sim0:0:0:0): SCSI status: Check Condition
(da0:umass-sim0:0:0:0): SCSI sense: NOT READY asc:3a,0 (Medium not present)
(da0:umass-sim0:0:0:0): Command Specific Info: 0xaa5501
(da0:umass-sim0:0:0:0): Error 6, Unretryable error
(da0:umass-sim0:0:0:0): READ(10). CDB: 28 00 00 00 00 01 00 00 01 00
(da0:umass-sim0:0:0:0): CAM status: SCSI Status Error
(da0:umass-sim0:0:0:0): SCSI status: Check Condition
(da0:umass-sim0:0:0:0): SCSI sense: UNIT ATTENTION asc:28,0 (Not ready to ready change, medium may have changed)
(da0:umass-sim0:0:0:0): Command Specific Info: 0xaa5501
(da0:umass-sim0:0:0:0): Retrying command (per sense data)
(da0:umass-sim0:0:0:0): READ(10). CDB: 28 00 00 00 00 01 00 00 01 00
(da0:umass-sim0:0:0:0): CAM status: SCSI Status Error
(da0:umass-sim0:0:0:0): SCSI status: Check Condition
(da0:umass-sim0:0:0:0): SCSI sense: NOT READY asc:3a,0 (Medium not present)
(da0:umass-sim0:0:0:0): Command Specific Info: 0xaa5501
(da0:umass-sim0:0:0:0): Error 6, Unretryable error
GEOM: da0: corrupt or invalid GPT detected.
GEOM: da0: GPT rejected -- may not be recoverable.

Trying to use the SDHC card as an MS-DOS device, I received this error:

(da0:umass-sim0:0:0:0): READ(10). CDB: 28 00 00 4a 00 00 00 00 80 00
(da0:umass-sim0:0:0:0): CAM status: SCSI Status Error
(da0:umass-sim0:0:0:0): SCSI status: Check Condition
(da0:umass-sim0:0:0:0): SCSI sense: UNIT ATTENTION asc:28,0 (Not ready to ready change, medium may have changed)
(da0:umass-sim0:0:0:0): Command Specific Info: 0xaa5501
(da0:umass-sim0:0:0:0): Retrying command (per sense data)
(da0:umass-sim0:0:0:0): READ(10). CDB: 28 00 00 4a 00 00 00 00 80 00
(da0:umass-sim0:0:0:0): CAM status: SCSI Status Error
(da0:umass-sim0:0:0:0): SCSI status: Check Condition
(da0:umass-sim0:0:0:0): SCSI sense: NOT READY asc:3a,0 (Medium not present)
(da0:umass-sim0:0:0:0): Command Specific Info: 0xaa5501
(da0:umass-sim0:0:0:0): Error 6, Unretryable error

I'm using:

[root@ativ ~/bin]# uname -a
FreeBSD ativ.local 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r260700: Thu Jan 16 07:03:04 CET 2014 root@ativ.local:/usr/obj/usr/src/sys/NEWCONS amd64

I've tested the card in Mac OS X on a Mac mini and in Windows 8 on the same
laptop, without problems. I've only noted that the I/O speed is much lower
on the Mac mini.

Mac Write:
mac-mini:~ maurizio$ sudo dd if=/dev/zero of=/dev/disk1 bs=1048576 count=200
209715200 bytes transferred in 138.019361 secs (1519462 bytes/sec)

Mac Read:
mac-mini:~ maurizio$ sudo dd of=/dev/null if=/dev/disk1 bs=1048576 count=200
209715200 bytes transferred in 23.719487 secs (8841473 bytes/sec)

FreeBSD Write:
[root@ativ ~]# dd if=/dev/zero of=/dev/da0 bs=1048576 count=200
209715200 bytes transferred in 13.647264 secs (15366831 bytes/sec)

FreeBSD Read:
[root@ativ ~]# dd of=/dev/null if=/dev/da0 bs=1048576 count=200
209715200 bytes transferred in 11.050251 secs (18978320 bytes/sec)

Windows Write: with DH_Speed, 1 min. test, 16M/sec
Windows Read: with DH_Speed, 1 min. test, 17.4M/sec

Regards,
Maurizio
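For context, the attach/detach cycle being attempted above would look
roughly like the following. This is a minimal sketch: the pool name "tank"
is an assumption, since the report does not name the pool.

    # attach the card as an L2ARC (cache) device; "tank" is a hypothetical pool name
    zpool add tank cache da0
    # the device should now be listed under the "cache" section
    zpool status tank
    # detach it again before errors like the above wedge the pool
    zpool remove tank da0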
From owner-freebsd-fs@FreeBSD.ORG Tue Feb 4 14:01:45 2014
Date: Tue, 04 Feb 2014 15:53:48 +0200
From: Andriy Gapon
To: freebsd-fs, FreeBSD Current, freebsd-doc@FreeBSD.org
Subject: zfs boot manual pages

I've started working on manual pages for the zfs boot chain.

Please [p]review my work in progress here:
https://github.com/avg-I/freebsd/compare/review;zfs-boot-man-pages

Any additions, corrections, suggestions and other kinds of reviewing are
welcome. Patches and pull requests are very welcome!

Many thanks to Warren Block for the initial review and many fixes.

--
Andriy Gapon
From owner-freebsd-fs@FreeBSD.ORG Tue Feb 4 15:25:32 2014
Date: Tue, 4 Feb 2014 16:25:08 +0100
From: CeDeROM
To: freebsd-doc@freebsd.org, FreeBSD Filesystems
Subject: poor fusefs documentation

Hello :-)

I am trying to use various fusefs mounters, but the documentation for this
seems to be inconsistent, incomplete, or missing entirely from the
Handbook. There is only the mount_fusefs utility, which refers to a
fuse_daemon that does not exist. It is impossible to mount anything on
first contact with fusefs in FreeBSD; please update the documentation :-)

Best regards :-)
Tomek

--
CeDeROM, SQ7MHZ, http://www.tomek.cedro.info
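For what it's worth, a working sequence for one common case (NTFS via a
FUSE filesystem from ports) looks roughly like this. It is a sketch under
assumptions: the package name fusefs-ntfs and the device /dev/da0s1 are
illustrative, and the ntfs-3g helper invokes mount_fusefs itself rather
than being run through it.

    kldload fuse                   # FUSE kernel module
    pkg install fusefs-ntfs        # ntfs-3g from ports; package name assumed
    mkdir -p /mnt/ntfs
    ntfs-3g /dev/da0s1 /mnt/ntfs   # the helper daemon mounts the volume itself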
From owner-freebsd-fs@FreeBSD.ORG Tue Feb 4 15:34:06 2014
Date: Tue, 4 Feb 2014 16:34:05 +0100
From: CeDeROM
To: freebsd-doc@freebsd.org, FreeBSD Filesystems
Subject: Re: poor fusefs documentation

On Tue, Feb 4, 2014 at 4:25 PM, CeDeROM wrote:
> I am trying to use various fusefs mounters, but the documentation for
> this seems to be inconsistent, incomplete, or missing entirely from the
> Handbook. There is only the mount_fusefs utility, which refers to a
> fuse_daemon that does not exist. It is impossible to mount anything on
> first contact with fusefs in FreeBSD; please update the documentation
> :-)

For example, I need to mount cryptofs or ntfs. I would guess that
mount_fusefs should be used for that, with a proper "-t" option as in the
standard mount program, or that some automatic filesystem recognition
should take place. Instead, I must use a dedicated cryptofs application
with no manual page, there is no application at all for ntfs, and apropos
can tell me nothing about either cryptofs or ntfs.
This looks so Linux :-(

--
CeDeROM, SQ7MHZ, http://www.tomek.cedro.info

From owner-freebsd-fs@FreeBSD.ORG Tue Feb 4 18:25:49 2014
Date: Tue, 4 Feb 2014 19:25:48 +0100
From: CeDeROM
To: freebsd-doc@freebsd.org, FreeBSD Filesystems
Subject: Re: poor fusefs documentation

On Tue, Feb 4, 2014 at 4:34 PM, CeDeROM wrote:
> For example, I need to mount cryptofs or ntfs. I would guess that
> mount_fusefs should be used for that, with a proper "-t" option as in
> the standard mount program, or that some automatic filesystem
> recognition should take place. Instead, I must use a dedicated cryptofs
> application with no manual page, there is no application at all for
> ntfs, and apropos can tell me nothing about either cryptofs or ntfs.
> This looks so Linux :-(

In a perfect situation I would see mount_fusefs as a frontend for the
other filesystem loaders/modules, so that we use only mount_fusefs and no
other programs, just as ifconfig works. If another program/loader has a
standalone binary, I would consider naming it mount_fusefs_{filesystem} to
stay coherent with the base system and the rest of the mount framework. It
would also be nice if mount_fusefs could detect the filesystem when a
proper loader is found.

I know the fusefs modules/loaders are part of the ports tree, but they
should also provide man/info and apropos pages and stick to the current
FreeBSD naming conventions. This way we could get coherent support for the
other filesystems provided by fusefs modules.

Please keep the FreeBSD ports "the FreeBSD way", not the "Linux way";
many of us switched to FreeBSD because of that integrity =)

--
CeDeROM, SQ7MHZ, http://www.tomek.cedro.info
From owner-freebsd-fs@FreeBSD.ORG Tue Feb 4 19:13:04 2014
Date: Tue, 4 Feb 2014 13:13:03 -0600
From: Scot Hetzel
To: Andriy Gapon
Cc: freebsd-doc@freebsd.org, FreeBSD Current, freebsd-fs
Subject: Re: zfs boot manual pages

On Tue, Feb 4, 2014 at 7:53 AM, Andriy Gapon wrote:
>
> I've started working on manual pages for the zfs boot chain.
>
> Please [p]review my work in progress here:
> https://github.com/avg-I/freebsd/compare/review;zfs-boot-man-pages
>
> Any additions, corrections, suggestions and other kinds of reviewing are
> welcome. Patches and pull requests are very welcome!
>
> Many thanks to Warren Block for the initial review and many fixes.

One fix for the gptzfsboot man page would be to mention that gptzfsboot
is installed into a GPT partition of type freebsd-boot, and that the -i 1
refers to the GPT index for this partition.

--
DISCLAIMER:
No electrons were maimed while sending this message. Only slightly bruised.
From owner-freebsd-fs@FreeBSD.ORG Tue Feb 4 20:13:56 2014
Date: Tue, 4 Feb 2014 13:13:54 -0700 (MST)
From: Warren Block
To: Scot Hetzel
Cc: freebsd-doc@freebsd.org, FreeBSD Current, freebsd-fs, Andriy Gapon
Subject: Re: zfs boot manual pages

On Tue, 4 Feb 2014, Scot Hetzel wrote:
> One fix for the gptzfsboot man page would be to mention that gptzfsboot
> is installed into a GPT partition of type freebsd-boot, and that the
> -i 1 refers to the GPT index for this partition.

We are missing that from the gptboot.8 page also.

  gptzfsboot is installed in a freebsd-boot partition, usually the
  first partition on the disk.  A ``protective MBR'' (see gpart(8)) is
  typically used in combination with gptzfsboot.

  To install gptzfsboot on the ada0 drive:

    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
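A verification step might be worth showing next to that example as well.
The layout below is purely illustrative (partition sizes and the 465G disk
are assumptions); the point is that index 1 is the freebsd-boot partition
that received the bootcode:

  # confirm where the bootcode went
  gpart show ada0
  =>       34  976773101  ada0  GPT  (466G)
           34       1024     1  freebsd-boot  (512K)
         1058  976772077     2  freebsd-zfs  (466G)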
From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 05:01:15 2014
Date: Tue, 04 Feb 2014 22:27:16 -0600
From: Graham Allan
To: freebsd-fs@freebsd.org
Subject: practical maximum number of drives

This may well be a question with no real answer, but since we're speccing
out a new ZFS-based storage system, I've been asked what the maximum
number of drives it can support would be (for a hypothetical expansion
option). While there are some obvious limits such as SAS addressing, I
assume there must be more fundamental ones in the kernel or drivers, and
the practical limits will be very different from the hypothetical ones.

So far the largest system we've built is using three 45-drive chassis on
one SAS2008 (mps) controller, so 135 drives total. Over many months of
running we had several drives fail and be replaced, and eventually the OS
(9.1) failed to assign new da devices. It was time to patch the system and
reboot anyway, which solved it, but we did wonder if we were running into
some kind of limit around 150 drives - though I don't see why.

Interestingly, we initially built this system with each drive chassis on
its own SAS2008 HBA, but it ultimately behaved better daisy-chained with
only one. I think I saw a hint somewhere this could be to do with
interrupt sharing...

Thanks for any insights,

Graham
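As a quick cross-check of how many disks the OS has actually attached
(kern.disks and camcontrol are standard; the grep pattern assumes plain
daN device names):

    # count attached da(4) devices
    sysctl -n kern.disks | tr ' ' '\n' | grep -c '^da'
    # list every CAM device with its controller path
    camcontrol devlist -v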
From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 05:36:19 2014
Date: Tue, 4 Feb 2014 21:36:15 -0800
From: aurfalien
To: Graham Allan
Cc: FreeBSD FS
Subject: Re: practical maximum number of drives

Hi Graham,

When you say it behaved better with 1 HBA, what were the issues that made
you go that route?

Also, curious that you have that many drives on 1 PCI card - is it PCIe 3
etc… and is saturation an issue?

- aurf

On Feb 4, 2014, at 8:27 PM, Graham Allan wrote:
> This may well be a question with no real answer, but since we're
> speccing out a new ZFS-based storage system, I've been asked what the
> maximum number of drives it can support would be (for a hypothetical
> expansion option). While there are some obvious limits such as SAS
> addressing, I assume there must be more fundamental ones in the kernel
> or drivers, and the practical limits will be very different from the
> hypothetical ones.
>
> So far the largest system we've built is using three 45-drive chassis
> on one SAS2008 (mps) controller, so 135 drives total. Over many months
> of running we had several drives fail and be replaced, and eventually
> the OS (9.1) failed to assign new da devices. It was time to patch the
> system and reboot anyway, which solved it, but we did wonder if we were
> running into some kind of limit around 150 drives - though I don't see
> why.
>
> Interestingly, we initially built this system with each drive chassis
> on its own SAS2008 HBA, but it ultimately behaved better daisy-chained
> with only one. I think I saw a hint somewhere this could be to do with
> interrupt sharing...
>
> Thanks for any insights,
>
> Graham

From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 06:58:38 2014
Date: Wed, 5 Feb 2014 00:58:37 -0600
From: Scot Hetzel
To: Warren Block
Cc: freebsd-doc@freebsd.org, FreeBSD Current, freebsd-fs, Andriy Gapon
Subject: Re: zfs boot manual pages

On Tue, Feb 4, 2014 at 2:13 PM, Warren Block wrote:
> We are missing that from the gptboot.8 page also.
>
>   gptzfsboot is installed in a freebsd-boot partition, usually the
>   first partition on the disk.  A ``protective MBR'' (see gpart(8)) is
>   typically used in combination with gptzfsboot.
>
>   To install gptzfsboot on the ada0 drive:
>
>     gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

That sounds perfect for both man pages.

--
DISCLAIMER:
No electrons were maimed while sending this message. Only slightly bruised.

From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 07:18:56 2014
Date: Wed, 05 Feb 2014 08:48:28 +0200
From: Daniel Kalchev
To: freebsd-fs@freebsd.org
Subject: Re: practical maximum number of drives

I also wonder how you managed to go over the LSI2008's limit of 112
drives...

On 05.02.14 07:36, aurfalien wrote:
> Hi Graham,
>
> When you say it behaved better with 1 HBA, what were the issues that
> made you go that route?
>
> Also, curious that you have that many drives on 1 PCI card - is it
> PCIe 3 etc… and is saturation an issue?
>
> - aurf
>
> On Feb 4, 2014, at 8:27 PM, Graham Allan wrote:
>> So far the largest system we've built is using three 45-drive chassis
>> on one SAS2008 (mps) controller, so 135 drives total. Over many months
>> of running we had several drives fail and be replaced, and eventually
>> the OS (9.1) failed to assign new da devices.
It was time to patch the system = and reboot anyway, which solved it, but we did wonder if we were running = into some kind of limit around 150 drives - though I don't see why. >> >> Interestingly we initially built this system with each drive chassis o= n its own SAS2008 HBA, but it ultimately behaved better daisy-chained wit= h only one. I think I saw a hint somewhere this could be to do with inter= rupt sharing... >> >> Thanks for any insights, >> >> Graham >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 08:08:37 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4DF817D2 for ; Wed, 5 Feb 2014 08:08:37 +0000 (UTC) Received: from mail-pd0-x22a.google.com (mail-pd0-x22a.google.com [IPv6:2607:f8b0:400e:c02::22a]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 225061B11 for ; Wed, 5 Feb 2014 08:08:37 +0000 (UTC) Received: by mail-pd0-f170.google.com with SMTP id p10so56284pdj.1 for ; Wed, 05 Feb 2014 00:08:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=6GrngObxkntJUnB3pd0a62ez/5rR4AlJqZlVapgUWMU=; b=OS7Vq6+lMFRD17rRoS1trVfmecrLFK61S47YzvyZIK5MQJOLyOGGKr7Y/9m0akl45p 0cIhJx/k45duYkE2bA3C67/9AVpSV5FS2Fgj8GzPQ8yTvGjak/pMkxaRPA1R4i9NspKO /wmoUD/IYQCy9JmqLzmGg09D43H7icEc2tUG/T68bTQ281sx+mEGieBL2R7lNnOc/7in iPjkppju5Uzr+4FOy9MnssheZ1KfyiV8OGuGjP7LB0AsHkzVDLVG/eqyAZ4DQlADGWg0 oKqsou8kzkkFzNdMC1XFveo0fIzl/l5sVUgxD8vKsCT06RIOPCJoDMQAa1H4B4Y0l1Ya MGRA== MIME-Version: 1.0 X-Received: by 10.66.163.164 with SMTP id yj4mr54618pab.91.1391587714921; Wed, 05 Feb 2014 00:08:34 -0800 (PST) Received: by 10.68.126.199 with HTTP; Wed, 5 Feb 2014 00:08:34 -0800 (PST) In-Reply-To: <52F1DEBC.9020304@digsys.bg> References: <52F1BDA4.6090504@physics.umn.edu> <7D20F45E-24BC-4595-833E-4276B4CDC2E3@gmail.com> <52F1DEBC.9020304@digsys.bg> Date: Wed, 5 Feb 2014 03:08:34 -0500 Message-ID: Subject: Re: practical maximum number of drives From: Rich To: Daniel Kalchev Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 05 Feb 2014 08:08:37 -0000 The SAS2008 has a limit of 112 drives? http://www.lsi.com/downloads/Public/SAS%20ICs/LSISAS2008/SCG_LSISAS2008_PB_043009.pdf claims "up to 3000 devices." SAS2008 is a PCIe gen 2 x8 chip. I suspect the bottleneck order would go SAS expander then SAS2008 then PCIe. - Rich On Wed, Feb 5, 2014 at 1:48 AM, Daniel Kalchev wrote: > I also wonder how you managed to go over the LSI2008's limit of 112 > drives... 
>
> On 05.02.14 07:36, aurfalien wrote:
>> [...]

From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 08:52:15 2014
Date: Wed, 05 Feb 2014 10:52:10 +0200
From: Daniel Kalchev
To: Rich
Cc: freebsd-fs
Subject: Re: practical maximum number of drives

Ok, two things.

First, it was a typo -- the number is 122 devices and I actually got it from the likes of this FAQ entry:
http://www.supermicro.com/support/faqs/faq.cfm?faq=10004
I never use these for anything other than HBA.

It is interesting to see that LSI claims 3000 devices. Might be, firmware has changed? Or there are different variations of the chip/implementation?

Daniel

On 05.02.14 10:08, Rich wrote:
> The SAS2008 has a limit of 112 drives?
>
> http://www.lsi.com/downloads/Public/SAS%20ICs/LSISAS2008/SCG_LSISAS2008_PB_043009.pdf
> claims "up to 3000 devices."
>
> SAS2008 is a PCIe gen 2 x8 chip.
>
> I suspect the bottleneck order would go SAS expander then SAS2008 then PCIe.
>
> - Rich
>
> [...]

From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 09:24:34 2014
Date: Wed, 5 Feb 2014 04:24:33 -0500
From: Rich
To: Daniel Kalchev
Cc: freebsd-fs
Subject: Re: practical maximum number of drives

http://www.lsi.com/products/host-bus-adapters/pages/lsi-sas-9200-8e.aspx

Claims 512.

http://www.lsi.com/products/host-bus-adapters/pages/lsi-sas-9211-4i.aspx

claims 256.

I'm really extraordinarily curious to see what would happen, but do not have >512 unused drives I could attach to a single HBA to find out...easily.

I have >256, but since all the stuff I can find other than SM's documentation claims 512 for the SAS2008, that would only refute their statement, not LSI's.

- Rich

On Wed, Feb 5, 2014 at 3:52 AM, Daniel Kalchev wrote:
> Ok, two things.
>
> First, it was a typo -- the number is 122 devices and I actually got it from
> the likes of this FAQ entry:
> http://www.supermicro.com/support/faqs/faq.cfm?faq=10004
> [...]
From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 09:42:59 2014
Date: Wed, 05 Feb 2014 10:41:14 +0100
From: Maurizio Vairani
To: Daryl Richards
Cc: freebsd-fs@freebsd.org
Subject: Re: SDHC Sony SF-8UX doesn't work.

Thank you Daryl, using /dev/rdisk1 the Mac-mini is the winner:

Mac Read:
mac-mini:~ maurizio$ sudo dd of=/dev/null if=/dev/rdisk1 bs=1048576 count=200
209715200 bytes transferred in 2.631920 secs (79681446 bytes/sec)
Mac Write:
mac-mini:~ maurizio$ sudo dd if=/dev/zero of=/dev/rdisk1 bs=1048576 count=200
209715200 bytes transferred in 11.709435 secs (17909933 bytes/sec)

On 04/02/2014 20:07, Daryl Richards wrote:
> Just to you since kinda OT:
>
> On the mac, use /dev/rdiskn instead of /dev/diskn. The rdisk is raw
> and works much faster than disk..
>
> On 2/3/2014, 11:21 AM, Maurizio Vairani wrote:
>> I'm trying to use this SDHC as a ZFS cache on a Samsung laptop:
>> da0 at umass-sim0 bus 0 scbus3 target 0 lun 0
>> da0: Removable Direct Access SCSI-0 device
>> da0: Serial Number 058F63666438
>> da0: 40.000MB/s transfers
>> da0: 7667MB (15702016 512 byte sectors: 255H 63S/T 977C)
>> da0: quirks=0x2
>>
>> After some minutes of usage this error is displayed:
>> (da0:umass-sim0:0:0:0): WRITE(10). CDB: 2a 00 00 0a 97 9f 00 00 05 00
>> (da0:umass-sim0:0:0:0): CAM status: SCSI Status Error
>> (da0:umass-sim0:0:0:0): SCSI status: Check Condition
>> (da0:umass-sim0:0:0:0): SCSI sense: UNIT ATTENTION asc:28,0 (Not ready to ready change, medium may have changed)
>> (da0:umass-sim0:0:0:0): Command Specific Info: 0xaa5501
>> (da0:umass-sim0:0:0:0): Retrying command (per sense data)
>> (da0:umass-sim0:0:0:0): WRITE(10). CDB: 2a 00 00 0a 97 9f 00 00 05 00
>> (da0:umass-sim0:0:0:0): CAM status: SCSI Status Error
>> (da0:umass-sim0:0:0:0): SCSI status: Check Condition
>> (da0:umass-sim0:0:0:0): SCSI sense: NOT READY asc:3a,0 (Medium not present)
>> (da0:umass-sim0:0:0:0): Command Specific Info: 0xaa5501
>> (da0:umass-sim0:0:0:0): Error 6, Unretryable error
>> (da0:umass-sim0:0:0:0): READ(10). CDB: 28 00 00 00 00 01 00 00 01 00
>> [same UNIT ATTENTION retry / NOT READY failure cycle as above]
>> GEOM: da0: corrupt or invalid GPT detected.
>> GEOM: da0: GPT rejected -- may not be recoverable.
>>
>> Trying to use the SDHC as an MS-DOS device I have received the error:
>> (da0:umass-sim0:0:0:0): READ(10). CDB: 28 00 00 4a 00 00 00 00 80 00
>> [same UNIT ATTENTION retry / NOT READY failure cycle as above]
>>
>> I'm using:
>> [root@ativ ~/bin]# uname -a
>> FreeBSD ativ.local 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r260700: Thu Jan 16 07:03:04 CET 2014 root@ativ.local:/usr/obj/usr/src/sys/NEWCONS amd64
>>
>> I've tested the card in Mac OS X on a Mac mini and in Windows 8 on the same laptop, without problems. I've only noted that the I/O speed is much lower on the Mac mini.
>>
>> Mac Write:
>> mac-mini:~ maurizio$ sudo dd if=/dev/zero of=/dev/disk1 bs=1048576 count=200
>> 209715200 bytes transferred in 138.019361 secs (1519462 bytes/sec)
>> Mac Read:
>> mac-mini:~ maurizio$ sudo dd of=/dev/null if=/dev/disk1 bs=1048576 count=200
>> 209715200 bytes transferred in 23.719487 secs (8841473 bytes/sec)
>>
>> FreeBSD Write:
>> [root@ativ ~]# dd if=/dev/zero of=/dev/da0 bs=1048576 count=200
>> 209715200 bytes transferred in 13.647264 secs (15366831 bytes/sec)
>> FreeBSD Read:
>> [root@ativ ~]# dd of=/dev/null if=/dev/da0 bs=1048576 count=200
>> 209715200 bytes transferred in 11.050251 secs (18978320 bytes/sec)
>>
>> Windows Write: with DH_Speed, 1 min. test, 16M/sec
>> Windows Read: with DH_Speed, 1 min. test, 17.4M/sec
>>
>> Regards,
>> Maurizio

From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 14:25:31 2014
Date: Wed, 05 Feb 2014 08:25:21 -0600
From: Graham Allan
To: freebsd-fs@freebsd.org
Subject: Re: practical maximum number of drives

dmesg reports a 2308 if that makes any difference.

This is using these 45-bay Supermicro chassis, where front and back are separate backplanes, so the front drives are on one bus and the back drives on another. That's a detail which I forgot in my first post - so although it's one HBA, the drives are split across both ports.

Graham

On 2/5/2014 12:48 AM, Daniel Kalchev wrote:
> I also wonder how you managed to go over the LSI2008's limit of 112
> drives...
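[For anyone wanting to check the same details on their own system: the controller generation, the CAM bus layout, and the number of assigned da devices discussed above can all be inspected with stock FreeBSD tools. A rough sketch - the commands are standard, but device names and counts obviously vary per machine:

  # which LSI mps(4) controller(s) the kernel attached
  dmesg | grep -E '^mps[0-9]'
  # every CAM bus the system knows about (one per HBA port / expander chain)
  camcontrol devlist -b
  # how many devices sit behind each scbus
  camcontrol devlist | grep -o 'scbus[0-9]*' | sort | uniq -c
  # how many da disks have been assigned so far
  sysctl -n kern.disks | tr ' ' '\n' | grep -c '^da'

The scbus counts are what would show whether the front/back backplane split described above really lands the drives on two separate buses.]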
From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 14:43:00 2014
Date: Wed, 05 Feb 2014 08:42:50 -0600
From: Graham Allan
To: aurfalien
Cc: FreeBSD FS
Subject: Re: practical maximum number of drives

On 2/4/2014 11:36 PM, aurfalien wrote:
> Hi Graham,
>
> When you say behaved better with 1 HBA, what were the issues that
> made you go that route?

It worked fine in general with 3 HBAs for a while, but OTOH 2 of the drive chassis were being very lightly used (and note I was being quite conservative and keeping each chassis as an independent zfs pool).

Actual problems occurred once while I was away, but our notes show we got some kind of repeated i/o deadlock. As well as all drive i/o stopping, we also couldn't use the sg_ses utilities to query the enclosures. This recurred several times after restarts throughout the day, and eventually "we" (again I wasn't here) removed the extra HBAs and daisy-chained all the chassis together. An inspired hunch, I guess. No issues since then.

Coincidentally a few days later I saw a message on this list from Xin Li, "Re: kern/177536: [zfs] zfs livelock (deadlock) with high write-to-disk load":

  One problem we found in field that is not easy to reproduce is that
  there is a lost interrupt issue in FreeBSD core.  This was fixed in
  r253184 (post-9.1-RELEASE and before 9.2, the fix will be part of the
  upcoming FreeBSD 9.2-RELEASE):

  http://svnweb.freebsd.org/base/stable/9/sys/kern/kern_intr.c?r1=249402&r2=253184&view=patch

  The symptom of this issue is that you basically see a lot of processes
  blocking on zio->zio_cv, while there is no disk activity.  However,
  the information you have provided can neither prove nor deny my guess.
  I post the information here so people are aware of this issue if they
  search these terms.

Something else suggested to me that multiple mps adapters would make this worse, but I'm not quite sure what. This issue wouldn't exist after 9.1 anyway.

> Also, curious that you have that many drives on 1 PCI card, is it PCI
> 3 etc... and is saturation an issue?

Pretty sure it's PCIe 2.x but we haven't seen any saturation issues. That was of course the motivation for using separate HBAs in the initial design, but it was more of a hypothetical concern than a real one - at least given our use pattern at present. This is more backing storage; the more intensive i/o usually goes to a hadoop filesystem.

Graham

From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 14:46:58 2014
Date: Wed, 05 Feb 2014 08:46:41 -0600
From: Graham Allan
To: Rich, Daniel Kalchev
Cc: freebsd-fs
Subject: Re: practical maximum number of drives

Is this difference not just because the 8e has two SAS buses and the 4i only one? So they are claiming a consistent 256 devices per bus.

We are using the 9200-8e, and our drives are split across both buses.

Do you have >256 drives on a single HBA yourself?

Graham

On 2/5/2014 3:24 AM, Rich wrote:
> http://www.lsi.com/products/host-bus-adapters/pages/lsi-sas-9200-8e.aspx
>
> Claims 512.
>
> http://www.lsi.com/products/host-bus-adapters/pages/lsi-sas-9211-4i.aspx
>
> claims 256.
>
> I'm really extraordinarily curious to see what would happen, but do
> not have >512 unused drives I could attach to a single HBA to find
> out...easily.
>
> I have >256, but since all the stuff I can find other than SM's
> documentation claims 512 for the SAS2008, that would only refute their
> statement, not LSI's.
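[Related aside: the per-bus and per-expander limits being debated here can also be inspected from the OS side, since camcontrol(8) gained SMP passthrough subcommands in FreeBSD 9. A sketch, assuming the expander is reachable through a device called ses0 - substitute whatever SES/SMP device camcontrol devlist actually reports on your system:

  # SMP REPORT GENERAL: number of phys, expander route table details
  camcontrol smprg ses0
  # per-phy view: which attached device hangs off each expander phy
  camcontrol smpphylist ses0

Walking each expander in a daisy chain this way is one concrete method of mapping a vendor's claimed device limit onto the real topology.]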
From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 18:42:44 2014
Date: Wed, 5 Feb 2014 10:42:29 -0800
From: aurfalien
To: Daniel Kalchev
Cc: freebsd-fs
Subject: Re: practical maximum number of drives

Cool.

But I was more curious about what led you to using 1 HBA over using a few more.

You mentioned something about interrupts, what problems manifested as a result of multi HBAs?

- aurf

On Feb 5, 2014, at 12:52 AM, Daniel Kalchev wrote:
> Ok, two things.
>
> First, it was a typo -- the number is 122 devices and I actually got it from the likes of this FAQ entry: http://www.supermicro.com/support/faqs/faq.cfm?faq=10004
> I never use these for anything other than HBA.
>
> It is interesting to see that LSI claims 3000 devices. Might be, firmware has changed? Or there are different variations of the chip/implementation?
>
> Daniel
>
> [...]
From owner-freebsd-fs@FreeBSD.ORG Wed Feb 5 18:45:35 2014
Date: Wed, 5 Feb 2014 10:45:33 -0800
From: aurfalien
To: Graham Allan
Cc: FreeBSD FS
Subject: Re: practical maximum number of drives

Ah great info, many thanks.

And pplz, ignore my reply to Daniel as I got the posts confused.
I recently switched to Sanka :)

- aurf

On Feb 5, 2014, at 6:42 AM, Graham Allan wrote:
> [...]
>
> Graham

From owner-freebsd-fs@FreeBSD.ORG Thu Feb 6 08:18:00 2014
Date: Thu, 6 Feb 2014 09:17:29 +0100
From: Matthias Gamsjager
To: Anton Sayetsky
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS and Wired memory, again

What is your vfs.zfs.zio.use_uma setting? Setting this back to 0 keeps the wired memory in line with the ARC.

On Wed, Jan 29, 2014 at 10:30 AM, Matthias Gamsjager wrote:
> Found it: in the FreeBSD Current list, the thread with subject 'ARC "pressured out", how to control/stabilize' looks kinda alike.
>
> On Wed, Jan 29, 2014 at 10:28 AM, Matthias Gamsjager wrote:
>> I remember reading something similar a couple of days ago but can't find the thread.
>>
>> On Tue, Jan 28, 2014 at 7:50 PM, Anton Sayetsky wrote:
>>> 2013-11-22 Anton Sayetsky:
>>> > Hello,
>>> >
>>> > I'm planning to deploy a ~150 TiB ZFS pool and when playing with ZFS
>>> > noticed that the amount of wired memory is MUCH bigger than the ARC size (in
>>> > absence of other hungry memory consumers, of course). I'm afraid that
>>> > this strange behavior may become even worse on a machine with a big pool
>>> > and some hundreds of gibibytes of RAM.
>>> >
>>> > So let me explain what happened.
>>> >
>>> > Immediately after booting, top says the following:
>>> > =====
>>> > Mem: 14M Active, 13M Inact, 117M Wired, 2947M Free
>>> > ARC: 24M Total, 5360K MFU, 18M MRU, 16K Anon, 328K Header, 1096K Other
>>> > =====
>>> > Ok, wired mem - arc = 92 MiB
>>> >
>>> > Then I started to read the pool (tar cpf /dev/null /).
>>> > Memory usage when ARC size is ~1GiB:
>>> > =====
>>> > Mem: 16M Active, 15M Inact, 1410M Wired, 1649M Free
>>> > ARC: 1114M Total, 29M MFU, 972M MRU, 21K Anon, 18M Header, 95M Other
>>> > =====
>>> > 1410-1114=296 MiB
>>> >
>>> > Memory usage when ARC size reaches its maximum of 2 GiB:
>>> > =====
>>> > Mem: 16M Active, 16M Inact, 2523M Wired, 536M Free
>>> > ARC: 2067M Total, 3255K MFU, 1821M MRU, 35K Anon, 38M Header, 204M Other
>>> > =====
>>> > 2523-2067=456 MiB
>>> >
>>> > Memory usage a few minutes later:
>>> > =====
>>> > Mem: 10M Active, 27M Inact, 2721M Wired, 333M Free
>>> > ARC: 2002M Total, 22M MFU, 1655M MRU, 21K Anon, 36M Header, 289M Other
>>> > =====
>>> > 2721-2002=719 MiB
>>> >
>>> > So why has the wired ram on a machine with only a minimal number of services
>>> > grown from 92 to 719 MiB? Sometimes I can even see about a gig!
>>> > I'm using 9.2-RELEASE-p1 amd64. The test machine has a T5450 C2D CPU and 4
>>> > G RAM (actual available amount is 3 G). The ZFS pool is configured on a
>>> > GPT partition of a single 1 TB HDD.
>>> > Disabling/enabling prefetch doesn't help. Limiting ARC to 1 gig doesn't help.
>>> > When reading a pool, evict skips can increment very fast, and sometimes
>>> > arc metadata exceeds its limit (2x-5x).
>>> >
>>> > I've attached logs with the system configuration and outputs from top, ps,
>>> > zfs-stats and vmstat.
>>> > conf.log = system configuration, also uploaded to http://pastebin.com/NYBcJPeT
>>> > top_ps_zfs-stats_vmstat_afterboot = memory stats immediately after booting, http://pastebin.com/mudmEyG5
>>> > top_ps_zfs-stats_vmstat_1g-arc = after ARC grew to 1 gig, http://pastebin.com/4AC8dn5C
>>> > top_ps_zfs-stats_vmstat_fullmem = when ARC reached its limit of 2 gigs, http://pastebin.com/bx7svEP0
>>> > top_ps_zfs-stats_vmstat_fullmem_2 = a few minutes later, http://pastebin.com/qYWFaNeA
>>> >
>>> > What should I do next?
>>> BUMP

From owner-freebsd-fs@FreeBSD.ORG Thu Feb 6 11:13:35 2014
Date: Thu, 6 Feb 2014 13:13:14 +0200
From: Anton Sayetsky
To: Matthias Gamsjager
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS and Wired memory, again

I've attached all important configs to the first message in this thread.
;) jason@jnb:~$ sysctl vfs.zfs.zio vfs.zfs.zio.use_uma: 0 jason@jnb:~$ From owner-freebsd-fs@FreeBSD.ORG Thu Feb 6 13:20:34 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B337D465 for ; Thu, 6 Feb 2014 13:20:34 +0000 (UTC) Received: from mail-wi0-x22f.google.com (mail-wi0-x22f.google.com [IPv6:2a00:1450:400c:c05::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 4E0381464 for ; Thu, 6 Feb 2014 13:20:34 +0000 (UTC) Received: by mail-wi0-f175.google.com with SMTP id hm4so1592487wib.8 for ; Thu, 06 Feb 2014 05:20:32 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=piRZ6Q7Dae9liUbtZFOoERV62dzT/zWIEUvBFvKgpbk=; b=VyNHdTlXaa/FWIwTEqA12R2g26On09R3SUCPMq09MQ+eArXkk7SiYMOHrkeqM7/7hJ hG+AzBUT0NS8j+7jTuRTCP5/eg53XoqbktllUTKhxoVBfMEKs7eJxw24UfVyozwXcSSV 4RfVpotzswkwyujCFECEG/Ep5KZMlQsN772wnUjtqSB3lp6zTSzCrKnCk12nhaBKIkVD GjS9+qjudJ2BT/IToOW/ZxoIGpg6g3yGGvIswNmfajyh9Phorf6OhjXd5lLKTY1irtZN TxxGZO8diLJ/Tn7bEVtTrZyhuw3DDMlhNLy0xnF3N/hA+ThxBfFMP0mqT2QC3Oq3FmsO jlPA== MIME-Version: 1.0 X-Received: by 10.194.119.168 with SMTP id kv8mr1946501wjb.41.1391692832604; Thu, 06 Feb 2014 05:20:32 -0800 (PST) Received: by 10.194.60.17 with HTTP; Thu, 6 Feb 2014 05:20:32 -0800 (PST) Date: Thu, 6 Feb 2014 08:20:32 -0500 Message-ID: Subject: Recovering deleted file, strange structure From: Felipe Monteiro de Carvalho To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 06 Feb 2014 13:20:34 -0000 Hello, I am implementing software to recover deleted files in UFS-1/2. Right now I am focusing first on UFS-2, so I created a partition, added some files, deleted a file, and then added more files. The name of the file (10MB_88.bin) completely vanished from the disk image, and its inode and dir entry were also overwritten. But I found this strange place on the disk where I can clearly see references to the first and following block fragments of the disk ($B0 12 00 00 00 00 00 00), see this screenshot here: http://imageshack.com/a/img546/3399/o1lz.png But what kind of section/structure is this? I am reading the source code of the FreeBSD UFS driver, and I attempted to compare it to the structs there, but nothing seems to match ... every $20 bytes there is a new record with a reference to a block fragment. I tried to compare it to the ufs_cylinder_group but it doesn't match ... so any ideas which struct / place in the source code is used to create this structure? thank you very much =) -- Felipe Monteiro de Carvalho
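A practical first step for this kind of spelunking is to decode the superblock geometry, so that each candidate record can be turned into a byte offset the same way the driver would compute it. A minimal sketch, assuming a FreeBSD host with the base-system UFS headers; the image path is a hypothetical argument, and the constants and field names are those from <ufs/ffs/fs.h>:

#include <sys/param.h>
#include <ufs/ufs/dinode.h>
#include <ufs/ffs/fs.h>		/* struct fs, SBLOCK_UFS2, SBLOCKSIZE, FS_UFS2_MAGIC */
#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	static char buf[SBLOCKSIZE];
	struct fs *fs = (struct fs *)buf;
	int fd;

	if (argc != 2)
		errx(1, "usage: dumpsb device-or-image");
	if ((fd = open(argv[1], O_RDONLY)) < 0)
		err(1, "%s", argv[1]);
	/* The UFS2 superblock sits 64 KiB into the partition. */
	if (pread(fd, buf, SBLOCKSIZE, SBLOCK_UFS2) != SBLOCKSIZE)
		err(1, "pread");
	if (fs->fs_magic != FS_UFS2_MAGIC)
		errx(1, "no UFS2 superblock at offset %d", SBLOCK_UFS2);
	/* Geometry needed to turn fragment numbers into byte offsets. */
	printf("bsize %d fsize %d fpg %d ipg %d ncg %d\n",
	    fs->fs_bsize, fs->fs_fsize, fs->fs_fpg, fs->fs_ipg, fs->fs_ncg);
	/* Byte offset of cylinder group 0, for comparing against hex dumps. */
	printf("cg 0 at byte %jd\n",
	    (intmax_t)fsbtodb(fs, cgtod(fs, 0)) * DEV_BSIZE);
	return (0);
}

With fs_fsize and the cylinder-group offsets in hand, the leading quadword of each $20-byte record (read little-endian, e.g. $12B0) can be multiplied by the fragment size and the resulting offset compared against known file contents; that usually narrows down whether the region is a directory block, an indirect block, or something else entirely.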
From owner-freebsd-fs@FreeBSD.ORG Thu Feb 6 14:37:08 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E2E1C797 for ; Thu, 6 Feb 2014 14:37:06 +0000 (UTC) Received: from pi.nmdps.net (pi.nmdps.net [109.61.102.5]) by mx1.freebsd.org (Postfix) with ESMTP id 90EE81BFA for ; Thu, 6 Feb 2014 14:37:04 +0000 (UTC) Received: from pi.nmdps.net (pi.nmdps.net [109.61.102.5]) (Authenticated sender: krichy@cflinux.hu) by pi.nmdps.net (Postfix) with ESMTPSA id 1B1F8112D for ; Thu, 6 Feb 2014 15:36:57 +0100 (CET) Date: Thu, 6 Feb 2014 15:36:54 +0100 (CET) From: Richard Kojedzinszky X-X-Sender: krichy@pi.nmdps.net To: freebsd-fs@freebsd.org Subject: geom write cache handling Message-ID: User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: MULTIPART/MIXED; BOUNDARY="2628712688-239966612-1391696912=:61272" Content-ID: X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 06 Feb 2014 14:37:08 -0000 This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. --2628712688-239966612-1391696912=:61272 Content-Type: TEXT/PLAIN; CHARSET=US-ASCII; format=flowed Content-ID: Dear fs team, I own an STEC SSD, which I use for ZFS SLOG. The device is broken in that if it gets a SCSI synchronize cache, it does something which is very slow. Actually, with it in FreeBSD the reachable sync IOPS is around 100. When the device has write cache disabled, there is no need to send the SCSI synchronize cache commands to it, and without them I can reach 1400 IOPS. The device itself has power-loss protection, so I risk no data corruption at all. By the way, Linux behaves the same: if either ATA or SCSI disks have their write cache turned off, it does not even send the command to the drive. Also, I have an Intel S3700, which is much faster, but has similar symptoms. The drive handles synchronize cache commands much faster than the STEC, maybe it is implemented in the drive as a simple NOP, but as each 4K sync write is followed by a sync cache, it effectively halves the IOPS. I have it attached to a SATA2 controller only, and with WCE disabled but the sync cache still being sent I can reach around 4500 IOPS, while with the sync cache suppressed it can reach >9000 IOPS. I've attached a very simple patch for the ata layer, but I don't know how to implement it for the scsi subsystem also.
Regards, Kojedzinszky Richard --2628712688-239966612-1391696912=:61272 Content-Type: TEXT/PLAIN; CHARSET=US-ASCII; NAME=geom_write_through.diff Content-Transfer-Encoding: BASE64 Content-ID: Content-Description: Content-Disposition: ATTACHMENT; FILENAME=geom_write_through.diff ZGlmZiAtLWdpdCBhL3N5cy9jYW0vYXRhL2F0YV9kYS5jIGIvc3lzL2NhbS9h dGEvYXRhX2RhLmMNCmluZGV4IGNjMjgzMTEuLjEwZTRmOWIgMTAwNjQ0DQot LS0gYS9zeXMvY2FtL2F0YS9hdGFfZGEuYw0KKysrIGIvc3lzL2NhbS9hdGEv YXRhX2RhLmMNCkBAIC0xMjQyLDcgKzEyNDIsNyBAQCBhZGFyZWdpc3Rlcihz dHJ1Y3QgY2FtX3BlcmlwaCAqcGVyaXBoLCB2b2lkICphcmcpDQogCQltYXhp byA9IG1pbihtYXhpbywgMjU2ICogc29mdGMtPnBhcmFtcy5zZWNzaXplKTsN CiAJc29mdGMtPmRpc2stPmRfbWF4c2l6ZSA9IG1heGlvOw0KIAlzb2Z0Yy0+ ZGlzay0+ZF91bml0ID0gcGVyaXBoLT51bml0X251bWJlcjsNCi0Jc29mdGMt PmRpc2stPmRfZmxhZ3MgPSAwOw0KKwlzb2Z0Yy0+ZGlzay0+ZF9mbGFncyA9 IERJU0tGTEFHX1dSSVRFX1RIUk9VR0g7DQogCWlmIChzb2Z0Yy0+ZmxhZ3Mg JiBBREFfRkxBR19DQU5fRkxVU0hDQUNIRSkNCiAJCXNvZnRjLT5kaXNrLT5k X2ZsYWdzIHw9IERJU0tGTEFHX0NBTkZMVVNIQ0FDSEU7DQogCWlmIChzb2Z0 Yy0+ZmxhZ3MgJiBBREFfRkxBR19DQU5fVFJJTSkgew0KQEAgLTE4MzUsNiAr MTgzNSwxMiBAQCBhZGFkb25lKHN0cnVjdCBjYW1fcGVyaXBoICpwZXJpcGgs IHVuaW9uIGNjYiAqZG9uZV9jY2IpDQogCQkJfQ0KIAkJfQ0KIA0KKwkJaWYg KGF0YWlvLT5jbWQuZmVhdHVyZXMgPT0gQVRBX1NGX0VOQUJfV0NBQ0hFKSB7 DQorCQkJc29mdGMtPmRpc2stPmRfZmxhZ3MgJj0gfkRJU0tGTEFHX1dSSVRF X1RIUk9VR0g7DQorCQl9IGVsc2Ugew0KKwkJCXNvZnRjLT5kaXNrLT5kX2Zs YWdzIHw9IERJU0tGTEFHX1dSSVRFX1RIUk9VR0g7DQorCQl9DQorDQogCQlz b2Z0Yy0+c3RhdGUgPSBBREFfU1RBVEVfTk9STUFMOw0KIAkJLyoNCiAJCSAq IFNpbmNlIG91ciBwZXJpcGhlcmFsIG1heSBiZSBpbnZhbGlkYXRlZCBieSBh biBlcnJvcg0KZGlmZiAtLWdpdCBhL3N5cy9nZW9tL2dlb21fZGlzay5jIGIv c3lzL2dlb20vZ2VvbV9kaXNrLmMNCmluZGV4IDE2ZjZjNDQuLjJjYjRjZWYg MTAwNjQ0DQotLS0gYS9zeXMvZ2VvbS9nZW9tX2Rpc2suYw0KKysrIGIvc3lz L2dlb20vZ2VvbV9kaXNrLmMNCkBAIC00MDQsNiArNDA0LDEwIEBAIGdfZGlz a19zdGFydChzdHJ1Y3QgYmlvICpicCkNCiAJY2FzZSBCSU9fRkxVU0g6DQog CQlnX3RyYWNlKEdfVF9CSU8sICJnX2Rpc2tfZmx1c2hjYWNoZSglcykiLA0K IAkJICAgIGJwLT5iaW9fdG8tPm5hbWUpOw0KKwkJaWYgKGRwLT5kX2ZsYWdz ICYgRElTS0ZMQUdfV1JJVEVfVEhST1VHSCkgew0KKwkJCWVycm9yID0gMDsN CisJCQlicmVhazsNCisJCX0NCiAJCWlmICghKGRwLT5kX2ZsYWdzICYgRElT S0ZMQUdfQ0FORkxVU0hDQUNIRSkpIHsNCiAJCQllcnJvciA9IEVPUE5PVFNV UFA7DQogCQkJYnJlYWs7DQpkaWZmIC0tZ2l0IGEvc3lzL2dlb20vZ2VvbV9k aXNrLmggYi9zeXMvZ2VvbS9nZW9tX2Rpc2suaA0KaW5kZXggNWUwODFjOC4u YTUzYWEzOCAxMDA2NDQNCi0tLSBhL3N5cy9nZW9tL2dlb21fZGlzay5oDQor KysgYi9zeXMvZ2VvbS9nZW9tX2Rpc2suaA0KQEAgLTExMSw2ICsxMTEsNyBA QCBzdHJ1Y3QgZGlzayB7DQogI2RlZmluZSBESVNLRkxBR19MQUNLU19HT05F CTB4MTANCiAjZGVmaW5lIERJU0tGTEFHX1VOTUFQUEVEX0JJTwkweDIwDQog I2RlZmluZSBESVNLRkxBR19MQUNLU19ERUxNQVgJMHg0MA0KKyNkZWZpbmUg RElTS0ZMQUdfV1JJVEVfVEhST1VHSAkweDgwDQogDQogc3RydWN0IGRpc2sg KmRpc2tfYWxsb2Modm9pZCk7DQogdm9pZCBkaXNrX2NyZWF0ZShzdHJ1Y3Qg ZGlzayAqZGlzaywgaW50IHZlcnNpb24pOw0K --2628712688-239966612-1391696912=:61272-- From owner-freebsd-fs@FreeBSD.ORG Thu Feb 6 14:45:49 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2D4F1AC6 for ; Thu, 6 Feb 2014 14:45:49 +0000 (UTC) Received: from krichy.tvnetwork.hu (unknown [IPv6:2a01:be00:0:2::10]) by mx1.freebsd.org (Postfix) with ESMTP id A777C1CE5 for ; Thu, 6 Feb 2014 14:45:48 +0000 (UTC) Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id 3A3054161; Thu, 6 Feb 2014 15:44:28 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id 316B34160; Thu, 6 Feb 2014 
15:44:28 +0100 (CET) Date: Thu, 6 Feb 2014 15:44:28 +0100 (CET) From: krichy@tvnetwork.hu To: Richard Kojedzinszky Subject: Re: geom write cache handling In-Reply-To: Message-ID: References: User-Agent: Alpine 2.10 (DEB 1266 2009-07-14) MIME-Version: 1.0 Content-Type: MULTIPART/MIXED; BOUNDARY="1030603365-1547680064-1391697868=:18231" Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 06 Feb 2014 14:45:49 -0000 This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. --1030603365-1547680064-1391697868=:18231 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Fortunately I have my drives attached to a SAS controller, so I could add quirks for them. They are also attached. Regards, Kojedzinszky Richard Euronet Magyarorszag Informatika Zrt. On Thu, 6 Feb 2014, Richard Kojedzinszky wrote: > Date: Thu, 6 Feb 2014 15:36:54 +0100 (CET) > From: Richard Kojedzinszky > To: freebsd-fs@freebsd.org > Subject: geom write cache handling > > Dear fs team, > > I own an STEC SSD, which I use for ZFS SLOG. The device is broken in that if > it gets a SCSI synchronize cache, it does something which is very slow. > Actually, with it in FreeBSD the reachable sync IOPS is around 100. When the > device has write cache disabled, there is no need to send the SCSI > synchronize cache commands to it, and without them I can reach 1400 IOPS. The > device itself has power-loss protection, so I risk no data corruption at > all. > > By the way, Linux behaves the same: if either ATA or SCSI disks have their > write cache turned off, it does not even send the command to the drive. > > Also, I have an Intel S3700, which is much faster, but has similar symptoms. > The drive handles synchronize cache commands much faster than the STEC, maybe it > is implemented in the drive as a simple NOP, but as each 4K sync write is > followed by a sync cache, it effectively halves the IOPS. I have it attached to a SATA2 > controller only, and with WCE disabled but the sync cache still being sent I > can reach around 4500 IOPS, while with the sync cache suppressed it can reach >9000 > IOPS. > > I've attached a very simple patch for the ata layer, but I don't know how to > implement it for the scsi subsystem also.
> > Regards, > > Kojedzinszky Richard --1030603365-1547680064-1391697868=:18231 Content-Type: TEXT/x-diff; name=s3700.patch Content-Transfer-Encoding: BASE64 Content-ID: Content-Description: Content-Disposition: attachment; filename=s3700.patch Y29tbWl0IDA4NDFhYzFjZjM1Mzk3MTgxZGI5OGRmOTU0MDE1NDA5NzM1ZjFm YzQNCkF1dGhvcjogQ2hhcmxpZSBSb290IDxyb290QHBpLm5tZHBzLm5ldD4N CkRhdGU6ICAgVGh1IEZlYiA2IDE0OjQyOjIwIDIwMTQgKzAxMDANCg0KICAg IEludGVsIFMzNzAwIFNTRCBxdWlya3MNCg0KZGlmZiAtLWdpdCBhL3N5cy9j YW0vc2NzaS9zY3NpX2RhLmMgYi9zeXMvY2FtL3Njc2kvc2NzaV9kYS5jDQpp bmRleCBjYjVkNDFmLi45M2JiMjMxIDEwMDY0NA0KLS0tIGEvc3lzL2NhbS9z Y3NpL3Njc2lfZGEuYw0KKysrIGIvc3lzL2NhbS9zY3NpL3Njc2lfZGEuYw0K QEAgLTk4NCw2ICs5ODQsMTUgQEAgc3RhdGljIHN0cnVjdCBkYV9xdWlya19l bnRyeSBkYV9xdWlya190YWJsZVtdID0NCiAJfSwNCiAJew0KIAkJLyoNCisJ CSAqIEludGVsIFMzNzAwIFNlcmllcyBTU0RzDQorCQkgKiA0ayBvcHRpbWlz ZWQgJiB0cmltIG9ubHkgd29ya3MgaW4gNGsgcmVxdWVzdHMgKyA0ayBhbGln bmVkDQorCQkgKiBjYWNoZSBmbHVzaCBub3QgbmVlZGVkLCBhcyBwb3dlci1s b3NzLXByb3RlY3RlZA0KKwkJICovDQorCQl7IFRfRElSRUNULCBTSVBfTUVE SUFfRklYRUQsICJBVEEiLCAiSU5URUwgU1NEU0MyQkEqIiwgIioiIH0sDQor CQkvKnF1aXJrcyovREFfUV80SyB8IERBX1FfTk9fU1lOQ19DQUNIRQ0KKwl9 LA0KKwl7DQorCQkvKg0KIAkJICogS2luZ3N0b24gRTEwMCBTZXJpZXMgU1NE cw0KIAkJICogNGsgb3B0aW1pc2VkICYgdHJpbSBvbmx5IHdvcmtzIGluIDRr IHJlcXVlc3RzICsgNGsgYWxpZ25lZA0KIAkJICovDQo= --1030603365-1547680064-1391697868=:18231 Content-Type: TEXT/x-diff; name=stec.patch Content-Transfer-Encoding: BASE64 Content-ID: Content-Description: Content-Disposition: attachment; filename=stec.patch Y29tbWl0IDcxNmNjZjBiMDMwNTA0ZTM5YWM2MGMyNGIxYWFlZDkxOTliZTEx M2YNCkF1dGhvcjogQ2hhcmxpZSBSb290IDxyb290QHBpLm5tZHBzLm5ldD4N CkRhdGU6ICAgV2VkIEphbiAxNSAyMTo1MDo0NCAyMDE0ICswMTAwDQoNCiAg ICBBZGRlZCBubyBzeW5jIGNhY2hlIHF1aXJrIGZvciBTVEVDIE1BQ0gxNiBk cml2ZXMNCg0KZGlmZiAtLWdpdCBhL3N5cy9jYW0vc2NzaS9zY3NpX2RhLmMg Yi9zeXMvY2FtL3Njc2kvc2NzaV9kYS5jDQppbmRleCA0YTk2OTgxLi5jYjVk NDFmIDEwMDY0NA0KLS0tIGEvc3lzL2NhbS9zY3NpL3Njc2lfZGEuYw0KKysr IGIvc3lzL2NhbS9zY3NpL3Njc2lfZGEuYw0KQEAgLTEwNjIsNiArMTA2Miwx NCBAQCBzdGF0aWMgc3RydWN0IGRhX3F1aXJrX2VudHJ5IGRhX3F1aXJrX3Rh YmxlW10gPQ0KIAkJeyBUX0RJUkVDVCwgU0lQX01FRElBX0ZJWEVELCAiQVRB IiwgIlNHOVhDUzJEKiIsICIqIiB9LA0KIAkJLypxdWlya3MqL0RBX1FfNEsN CiAJfSwNCisJew0KKwkJLyoNCisJCSAqIFNURUMgTUFDSDE2IFNBVEEgU1NE cw0KKwkJICogTm8gY2FjaGUgc3luYw0KKwkJICovDQorCQl7IFRfRElSRUNU LCBTSVBfTUVESUFfRklYRUQsICJBVEEiLCAiU1RFQyAgICBNQUNIMTYqIiwg IioiIH0sDQorCQkvKnF1aXJrcyovREFfUV9OT19TWU5DX0NBQ0hFDQorCX0s DQogfTsNCiANCiBzdGF0aWMJZGlza19zdHJhdGVneV90CWRhc3RyYXRlZ3k7 DQo= --1030603365-1547680064-1391697868=:18231-- From owner-freebsd-fs@FreeBSD.ORG Thu Feb 6 16:54:48 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3424B89C for ; Thu, 6 Feb 2014 16:54:48 +0000 (UTC) Received: from platinum.linux.pl (platinum.edu.pl [81.161.192.4]) by mx1.freebsd.org (Postfix) with ESMTP id C8DAC1B48 for ; Thu, 6 Feb 2014 16:54:46 +0000 (UTC) Received: by platinum.linux.pl (Postfix, from userid 87) id 1D70F45218E; Thu, 6 Feb 2014 17:54:38 +0100 (CET) X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on platinum.linux.pl X-Spam-Level: X-Spam-Status: No, score=-1.3 required=3.0 tests=ALL_TRUSTED,AWL autolearn=disabled version=3.3.2 Received: from [10.255.0.2] (unknown [83.151.38.73]) by platinum.linux.pl (Postfix) with ESMTPA id 82C1245218C for ; Thu, 6 Feb 2014 17:54:37 +0100 (CET) Message-ID: 
<52F3BE38.6050103@platinum.linux.pl> Date: Thu, 06 Feb 2014 17:54:16 +0100 From: Adam Nowacki User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: ZFS and Wired memory, again References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 06 Feb 2014 16:54:48 -0000 So what exactly is the problem here? Free memory is essentially wasted memory, and there is still plenty of free memory available, so there is no point in compacting wired memory. Once free memory drops below vm.v_free_target you should see something happen. On 2014-01-28 19:50, Anton Sayetsky wrote: > 2013-11-22 Anton Sayetsky : >> Hello, >> >> I'm planning to deploy a ~150 TiB ZFS pool and when playing with ZFS >> noticed that the amount of wired memory is MUCH bigger than the ARC size (in >> the absence of other hungry memory consumers, of course). I'm afraid that >> this strange behavior may become even worse on a machine with a big pool >> and some hundreds of gibibytes of RAM. >> >> So let me explain what happened. >> >> Immediately after booting the system, top says the following: >> ===== >> Mem: 14M Active, 13M Inact, 117M Wired, 2947M Free >> ARC: 24M Total, 5360K MFU, 18M MRU, 16K Anon, 328K Header, 1096K Other >> ===== >> Ok, wired mem - arc = 92 MiB >> >> Then I started to read the pool (tar cpf /dev/null /). >> Memory usage when ARC size is ~1GiB >> ===== >> Mem: 16M Active, 15M Inact, 1410M Wired, 1649M Free >> ARC: 1114M Total, 29M MFU, 972M MRU, 21K Anon, 18M Header, 95M Other >> ===== >> 1410-1114=296 MiB >> >> Memory usage when ARC size reaches its maximum of 2 GiB >> ===== >> Mem: 16M Active, 16M Inact, 2523M Wired, 536M Free >> ARC: 2067M Total, 3255K MFU, 1821M MRU, 35K Anon, 38M Header, 204M Other >> ===== >> 2523-2067=456 MiB >> >> Memory usage a few minutes later >> ===== >> Mem: 10M Active, 27M Inact, 2721M Wired, 333M Free >> ARC: 2002M Total, 22M MFU, 1655M MRU, 21K Anon, 36M Header, 289M Other >> ===== >> 2721-2002=719 MiB >> >> So why has the wired RAM on a machine with only a minimal amount of services >> grown from 92 to 719 MiB? Sometimes I can even see about a gig! >> I'm using 9.2-RELEASE-p1 amd64. Test machine has a T5450 C2D CPU and 4 >> G RAM (actual available amount is 3 G). ZFS pool is configured on a >> GPT partition of a single 1 TB HDD. >> Disabling/enabling prefetch doesn't help. Limiting ARC to 1 gig doesn't help either. >> When reading a pool, evict skips can increment very fast and sometimes >> arc metadata exceeds limit (2x-5x). >> >> I've attached logs with system configuration, outputs from top, ps, >> zfs-stats and vmstat. >> conf.log = system configuration, also uploaded to http://pastebin.com/NYBcJPeT >> top_ps_zfs-stats_vmstat_afterboot = memory stats immediately after >> booting system, http://pastebin.com/mudmEyG5 >> top_ps_zfs-stats_vmstat_1g-arc = after the ARC has grown to 1 gig, >> http://pastebin.com/4AC8dn5C >> top_ps_zfs-stats_vmstat_fullmem = when the ARC reached its limit of 2 gigs, >> http://pastebin.com/bx7svEP0 >> top_ps_zfs-stats_vmstat_fullmem_2 = a few minutes later, >> http://pastebin.com/qYWFaNeA >> >> What should I do next?
> BUMP > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Thu Feb 6 17:24:38 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EEB2B142 for ; Thu, 6 Feb 2014 17:24:38 +0000 (UTC) Received: from mail-vb0-x22e.google.com (mail-vb0-x22e.google.com [IPv6:2607:f8b0:400c:c02::22e]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id A9FB21F56 for ; Thu, 6 Feb 2014 17:24:38 +0000 (UTC) Received: by mail-vb0-f46.google.com with SMTP id o19so1670613vbm.33 for ; Thu, 06 Feb 2014 09:24:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=Rm0hxFXiX/wbLsZym04UPpct/zuOhXcIewAdX4vKa4k=; b=jncqATT2LvnS1hnY4iDy7ziDu1vFVff4TpEaX3kcxvBPpCv6oTfCYA5iC5uPDlsT1+ 3zgsbRQcrJhs6FtgGSu6G8u/naaUfVt+FgSKCsI38Concarybz+WOlzrcy/s1FuvfGlY DgWYa1+61OSnfz3vGR31HQEEY59cmTWtxSkZF6E1O2s6WYXgQVAYMddiTLO4WX8glevi wAaLYLwixhTruUCQQmtOgIbhHDfe31/Lw2e/cnWTPVgBQ1zqyBCWIFJ6RKkiDiiSDvk3 GWGILCP99NNsM+5MclAg9V0R0dmAD9AdZGU3ZcFBLizjmKOqWFfXOEc9GkWc7wNwDVTn iosw== X-Received: by 10.58.132.203 with SMTP id ow11mr6756650veb.1.1391707477817; Thu, 06 Feb 2014 09:24:37 -0800 (PST) MIME-Version: 1.0 Received: by 10.58.162.169 with HTTP; Thu, 6 Feb 2014 09:24:17 -0800 (PST) In-Reply-To: <52F3BE38.6050103@platinum.linux.pl> References: <52F3BE38.6050103@platinum.linux.pl> From: Anton Sayetsky Date: Thu, 6 Feb 2014 19:24:17 +0200 Message-ID: Subject: Re: ZFS and Wired memory, again To: Adam Nowacki Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 06 Feb 2014 17:24:39 -0000 2014-02-06 Adam Nowacki : > So what exactly is the problem here? Free memory is essentially wasted > memory, and there is still plenty of free memory available, so there is no > point in compacting wired memory. Once free memory drops below > vm.v_free_target you should see something happen. I'll quote my first message: >>> I'm planning to deploy a ~150 TiB ZFS pool and when playing with ZFS >>> noticed that the amount of wired memory is MUCH bigger than the ARC size (in >>> the absence of other hungry memory consumers, of course). ... >>> So why has the wired RAM on a machine with only a minimal amount of services >>> grown from 92 to 719 MiB? Sometimes I can even see about a gig! >>> I'm using 9.2-RELEASE-p1 amd64. ... >>> When reading a pool, evict skips can increment very fast and sometimes >>> arc metadata exceeds limit (2x-5x).
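For anyone chasing the same Wired-versus-ARC gap, the kernel's own counters can itemize most of it; none of the following is a fix, only a way to see which allocator owns the pages. A sketch using sysctl names as they exist on 9.x (the grep pattern is just a convenience):

# sysctl kstat.zfs.misc.arcstats.size
# sysctl vm.stats.vm.v_wire_count hw.pagesize
# vmstat -z | egrep 'ITEM|arc|zio|dnode|dmu'
# sysctl vfs.zfs.zio.use_uma

Wired bytes are v_wire_count multiplied by the page size, so the first two commands reproduce top's figures, and the UMA zones for dnode_t, dmu_buf_impl_t and the arc headers typically account for much of the "Other" bucket. Note that with vfs.zfs.zio.use_uma=0, as in the sysctl output quoted earlier in the thread, zio data buffers are allocated straight from the kernel map and will not appear in vmstat -z at all, so fragmentation of those allocations is a plausible home for the remaining wired memory.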
From owner-freebsd-fs@FreeBSD.ORG Thu Feb 6 17:56:05 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 97CFEC9E; Thu, 6 Feb 2014 17:56:05 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 689DA125E; Thu, 6 Feb 2014 17:56:05 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.7/8.14.7) with ESMTP id s16Hu5Mo027711; Thu, 6 Feb 2014 17:56:05 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s16Hu5P6027710; Thu, 6 Feb 2014 17:56:05 GMT (envelope-from linimon) Date: Thu, 6 Feb 2014 17:56:05 GMT Message-Id: <201402061756.s16Hu5P6027710@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-amd64@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/186515: [gptboot] Doesn't boot with GPT when # of entries over than 128. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 06 Feb 2014 17:56:05 -0000 Old Synopsis: Doesn't boot with GPT when # of entries over than 128. New Synopsis: [gptboot] Doesn't boot with GPT when # of entries over than 128. Responsible-Changed-From-To: freebsd-amd64->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Thu Feb 6 17:54:43 UTC 2014 Responsible-Changed-Why: reclassify. http://www.freebsd.org/cgi/query-pr.cgi?pr=186515 From owner-freebsd-fs@FreeBSD.ORG Thu Feb 6 19:30:02 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B7F2EC21 for ; Thu, 6 Feb 2014 19:30:02 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 98D561BA2 for ; Thu, 6 Feb 2014 19:30:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.7/8.14.7) with ESMTP id s16JU22C052496 for ; Thu, 6 Feb 2014 19:30:02 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s16JU2Pi052495; Thu, 6 Feb 2014 19:30:02 GMT (envelope-from gnats) Date: Thu, 6 Feb 2014 19:30:02 GMT Message-Id: <201402061930.s16JU2Pi052495@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: John Baldwin Subject: Re: amd64/186515: Doesn't boot with GPT when # of entries over than 128. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: John Baldwin List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 06 Feb 2014 19:30:02 -0000 The following reply was made to PR kern/186515; it has been noted by GNATS. 
From: John Baldwin To: freebsd-amd64@freebsd.org Cc: Yeong.Hun@freebsd.org, Jo , freebsd-gnats-submit@freebsd.org Subject: Re: amd64/186515: Doesn't boot with GPT when # of entries over than 128. Date: Thu, 6 Feb 2014 13:41:04 -0500 On Thursday, February 06, 2014 12:43:48 pm Yeong.Hun@freebsd.org, Jo wrote: > > >Number: 186515 > >Category: amd64 > >Synopsis: Doesn't boot with GPT when # of entries over than 128. > >Confidential: no > >Severity: non-critical > >Priority: low > >Responsible: freebsd-amd64 > >State: open > >Quarter: > >Keywords: > >Date-Required: > >Class: sw-bug > >Submitter-Id: current-users > >Arrival-Date: Thu Feb 06 17:50:00 UTC 2014 > >Closed-Date: > >Last-Modified: > >Originator: Yeong Hun, Jo > >Release: FreeBSD 10.0-RELEASE > >Organization: > - > >Environment: > FreeBSD localhost 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 1 22:34:59 UTC 2014 root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64 > >Description: > I tried making a USB memory stick using the GPT partition table scheme. I usually use 152 entries to align to a 4KB boundary for a data disk (start location) and 156 entries to align to a 4KB boundary for a system disk (end location; the misaligned start location is used for boot code, making a 4KB-aligned root partition). I don't like having "free" sectors on the disk :-) > > But it fails to boot when the partition entry count is adjusted to more than 128. > > * WORKS : 128-entry GPT (1st usable sector = 34) with a freebsd-boot partition at sector 34 or 40. > > * DOESN'T WORK : 152-entry GPT (1st usable sector = 40) with a freebsd-boot partition at sector 40, and a 156-entry GPT (1st usable sector = 41) with a freebsd-boot partition at sector 41. > > > > Yes, there's no problem with a default-size GPT partition table. 128 entries - the minimum entry count by spec - seems to be sufficient in most cases. But the table can be an arbitrary size and should be supported even in those cases. I think there's some issue in gpart or the early-stage boot loader (/boot/pmbr). > > >How-To-Repeat: > * For example, the USB disk is da0 here. > > # gpart create -s gpt -n 152 da0 > # gpart add -t freebsd-boot -b 40 -s 32 -i 1 da0 > # gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da0 > > and try to boot from the USB stick. It should show the "clockwise" loading screen even with no freebsd-ufs partition on the stick, but instead it shows nothing and reboots immediately. The same (nothing shown, immediate reboot) occurs with a freebsd-ufs partition carrying a populated /boot, too. Using more entries to pad out the table isn't the normal way to handle 4k alignment. You can just leave a gap before the start of freebsd-boot. Having the sectors "free" vs having them contain zero'd GPT entries doesn't really make a difference. One question is when does the boot break? Does it make it into the loader and break trying to boot the kernel? Does it make it into gptboot and break trying to load the loader?
-- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Fri Feb 7 08:35:31 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 746FAC6E for ; Fri, 7 Feb 2014 08:35:31 +0000 (UTC) Received: from mail-qa0-x230.google.com (mail-qa0-x230.google.com [IPv6:2607:f8b0:400d:c00::230]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 3142E112B for ; Fri, 7 Feb 2014 08:35:31 +0000 (UTC) Received: by mail-qa0-f48.google.com with SMTP id f11so4722418qae.35 for ; Fri, 07 Feb 2014 00:35:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=hu1lba1KLi+JahO4auC+I33FZkECS+nl1GY3gMljjC0=; b=FZBbsJ27teen3QXbKFJZizFpIS8goIPaph8FQ7eIWSg7nnU+Urtrh07iDouVimhztY 9VkvisC+jLM5c0YqeGHJEy4cQa8BCIMuGL3Ivkzxb1aj8s/xuKOe8KDY4S51wUkanxs6 gLIOLV2HNPwVkl8uzqEkb82U+owUHCARKCXA3Tuq3vJa7MUANIeGbqvPIdlzHEfhb9K2 +PBZZJJxcKHDq9HBVsL+7v27JcUo9GeO8uyDtcuhIgpk2DdAlHqbkRUc8HXzjl/pjGDp PwIRPlQ8UpEacDbtfkjjJrkP2lFOcHXgn4lIz+I2nrPo6aL6xO5IwrjxEIrDk4fc/a/V WQ/Q== MIME-Version: 1.0 X-Received: by 10.224.40.130 with SMTP id k2mr19445068qae.91.1391762130282; Fri, 07 Feb 2014 00:35:30 -0800 (PST) Received: by 10.96.37.227 with HTTP; Fri, 7 Feb 2014 00:35:30 -0800 (PST) In-Reply-To: <94A20D8E-292D-47B4-8D82-61A131B3010D@gmail.com> References: <52F1BDA4.6090504@physics.umn.edu> <7D20F45E-24BC-4595-833E-4276B4CDC2E3@gmail.com> <52F24DEA.9090905@physics.umn.edu> <94A20D8E-292D-47B4-8D82-61A131B3010D@gmail.com> Date: Fri, 7 Feb 2014 08:35:30 +0000 Message-ID: Subject: Re: practical maximum number of drives From: krad To: aurfalien Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 07 Feb 2014 08:35:31 -0000 I'm confused by all this: do you need massive storage, lots of redundancy, or just plain speed? If it's redundancy, you kind of messed that up by going with one controller. On 5 February 2014 18:45, aurfalien wrote: > Ah, great info, many thanks. > > And pplz, ignore my reply to Daniel as I got the posts confused. I > recently switched to Sanka :) > > - aurf > > On Feb 5, 2014, at 6:42 AM, Graham Allan wrote: > > > > > > > On 2/4/2014 11:36 PM, aurfalien wrote: > >> Hi Graham, > >> > >> When you say behaved better with 1 HBA, what were the issues that > >> made you go that route? > > > > It worked fine in general with 3 HBAs for a while but OTOH 2 of the > drive chassis were being very lightly used (and note I was being quite > conservative and keeping each chassis as an independent zfs pool). > > > > Actual problems occurred once while I was away but our notes show we got > some kind of repeated i/o deadlock. As well as all drive i/o stopping, we > also couldn't use the sg_ses utilities to query the enclosures. This > reoccurred several times after restarts throughout the day, and eventually > "we" (again I wasn't here) removed the extra HBAs and daisy-chained all the > chassis together. An inspired hunch, I guess. No issues since then.
> > > > Coincidentally a few days later I saw a message on this list from Xin Li > "Re: kern/177536: [zfs] zfs livelock (deadlock) with high write-to-disk > load": > > > > One problem we found in field that is not easy to reproduce is that > > there is a lost interrupt issue in FreeBSD core. This was fixed in > > r253184 (post-9.1-RELEASE and before 9.2, the fix will be part of the > > upcoming FreeBSD 9.2-RELEASE): > > > > > http://svnweb.freebsd.org/base/stable/9/sys/kern/kern_intr.c?r1=249402&r2=253184&view=patch > > > > The symptom of this issue is that you basically see a lot of processes > > blocking on zio->zio_cv, while there is no disk activity. However, > > the information you have provided can neither prove or deny my guess. > > I post the information here so people are aware of this issue if they > > search these terms. > > > > Something else suggested to me that multiple mps adapters would make > this worse but I'm not quite sure what. This issue wouldn't exist after 9.1 > anyway. > > > >> Also, curious that you have that many drives on 1 PCI card, is it PCI > >> 3 etc... and is saturation an issue? > > > > Pretty sure it's PCIe 2.x but we haven't seen any saturation issues. > That was of course the motivation for using separate HBAs in the initial > design but it was more of a hypothetical concern than a real one - at least > given our use pattern at present. This is more backing storage, the more > intensive i/o usually goes to a hadoop filesystem. > > > > Graham > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Fri Feb 7 12:43:04 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CDE2A675; Fri, 7 Feb 2014 12:43:04 +0000 (UTC) Received: from forward10.mail.yandex.net (forward10.mail.yandex.net [IPv6:2a02:6b8:0:202::5]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 7587B1773; Fri, 7 Feb 2014 12:43:04 +0000 (UTC) Received: from smtp7.mail.yandex.net (smtp7.mail.yandex.net [77.88.61.55]) by forward10.mail.yandex.net (Yandex) with ESMTP id B765A10208C4; Fri, 7 Feb 2014 16:42:52 +0400 (MSK) Received: from smtp7.mail.yandex.net (localhost [127.0.0.1]) by smtp7.mail.yandex.net (Yandex) with ESMTP id 7B4151580082; Fri, 7 Feb 2014 16:42:52 +0400 (MSK) Received: from 95.108.170.136-red.dhcp.yndx.net (95.108.170.136-red.dhcp.yndx.net [95.108.170.136]) by smtp7.mail.yandex.net (nwsmtp/Yandex) with ESMTPSA id wKQuXTYL5R-gq8KSqnp; Fri, 7 Feb 2014 16:42:52 +0400 (using TLSv1 with cipher CAMELLIA256-SHA (256/256 bits)) (Client certificate not present) X-Yandex-Uniq: 8fd7ed8c-0114-4241-8e78-1afadb582079 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1391776972; bh=6dlXSryU1X9hdr9OiKU8QRO09hsGGmDjM9z3f3eRNAs=; h=Message-ID:Date:From:User-Agent:MIME-Version:To:Subject: References:In-Reply-To:X-Enigmail-Version:Content-Type: Content-Transfer-Encoding; b=vIy10noD6RuYe4F7ZrcNVUDaeY1rNwLlEzz+BwKC/j9nlUPTns9nIhZPxTSutHI7Q jCXEYFYbuwuIe/oUga5arfxJh67FR8SH04T6mYqzeKZpOnFpJzY5RFWIcCiGMU2Iw0 ey5b5jQzoMKRKI5mIqHUWuk2y1UMRGCBptNCVI9s= Authentication-Results: smtp7.mail.yandex.net; 
dkim=pass header.i=@yandex.ru Message-ID: <52F4D4C9.3060902@yandex.ru> Date: Fri, 07 Feb 2014 16:42:49 +0400 From: "Andrey V. Elsukov" User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: John Baldwin , freebsd-fs@FreeBSD.org Subject: Re: amd64/186515: Doesn't boot with GPT when # of entries over than 128. References: <201402061930.s16JU2Pi052495@freefall.freebsd.org> In-Reply-To: <201402061930.s16JU2Pi052495@freefall.freebsd.org> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 07 Feb 2014 12:43:04 -0000 On 06.02.2014 23:30, John Baldwin wrote: > Using more entries to pad out the table isn't the normal way to handle 4k > alignment. You can just leave a gap before the start of freebsd-boot. Having > the sectors "free" vs having them contain zero'd GPT entries doesn't really > make a difference. One question is when does the boot break? Does it make it > into the loader and break trying to boot the kernel? Does it make it into > gptboot and break trying to load the loader? Hi John, this is gptboot's restriction. Look at the sys/boot/common/gpt.c. -- WBR, Andrey V. Elsukov From owner-freebsd-fs@FreeBSD.ORG Fri Feb 7 15:21:18 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 93BFB3E5 for ; Fri, 7 Feb 2014 15:21:18 +0000 (UTC) Received: from mail.physics.umn.edu (smtp.spa.umn.edu [128.101.220.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 643511847 for ; Fri, 7 Feb 2014 15:21:17 +0000 (UTC) Received: from c-66-41-25-68.hsd1.mn.comcast.net ([66.41.25.68] helo=[192.168.0.138]) by mail.physics.umn.edu with esmtpsa (TLSv1:AES128-SHA:128) (Exim 4.77 (FreeBSD)) (envelope-from ) id 1WBnEp-0008Y7-PX; Fri, 07 Feb 2014 09:21:10 -0600 Message-ID: <52F4F9DA.4050309@physics.umn.edu> Date: Fri, 07 Feb 2014 09:20:58 -0600 From: Graham Allan User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: krad References: <52F1BDA4.6090504@physics.umn.edu> <7D20F45E-24BC-4595-833E-4276B4CDC2E3@gmail.com> <52F24DEA.9090905@physics.umn.edu> <94A20D8E-292D-47B4-8D82-61A131B3010D@gmail.com> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on mrmachenry.spa.umn.edu X-Spam-Level: X-Spam-Status: No, score=-1.0 required=5.0 tests=ALL_TRUSTED autolearn=unavailable version=3.3.2 Subject: Re: practical maximum number of drives X-SA-Exim-Version: 4.2 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 07 Feb 2014 15:21:18 -0000 Was that question for me? Yes it is less redundant, but having a single HBA isn't so much worse than having the single server. I think we're after (1) massive space, (2) speed, (3) low cost, ahead of redundancy. 
True redundancy would need something much more elaborate - maybe using SAS drives instead of SATA to permit multiple paths, for one thing. On 2/7/2014 2:35 AM, krad wrote: > I'm confused by all this: do you need massive storage, lots of redundancy, > or just plain speed? If it's redundancy, you kind of messed that up by > going with one controller. From owner-freebsd-fs@FreeBSD.ORG Fri Feb 7 16:24:45 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7CC7FABE; Fri, 7 Feb 2014 16:24:45 +0000 (UTC) Received: from wonkity.com (wonkity.com [67.158.26.137]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 30C851E4E; Fri, 7 Feb 2014 16:24:45 +0000 (UTC) Received: from wonkity.com (localhost [127.0.0.1]) by wonkity.com (8.14.7/8.14.7) with ESMTP id s17GOibW006600; Fri, 7 Feb 2014 09:24:44 -0700 (MST) (envelope-from wblock@wonkity.com) Received: from localhost (wblock@localhost) by wonkity.com (8.14.7/8.14.7/Submit) with ESMTP id s17GOh2l006597; Fri, 7 Feb 2014 09:24:44 -0700 (MST) (envelope-from wblock@wonkity.com) Date: Fri, 7 Feb 2014 09:24:43 -0700 (MST) From: Warren Block To: "Andrey V. Elsukov" Subject: Re: amd64/186515: Doesn't boot with GPT when # of entries over than 128. In-Reply-To: <52F4D4C9.3060902@yandex.ru> Message-ID: References: <201402061930.s16JU2Pi052495@freefall.freebsd.org> <52F4D4C9.3060902@yandex.ru> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (wonkity.com [127.0.0.1]); Fri, 07 Feb 2014 09:24:44 -0700 (MST) Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 07 Feb 2014 16:24:45 -0000 On Fri, 7 Feb 2014, Andrey V. Elsukov wrote: > On 06.02.2014 23:30, John Baldwin wrote: >> Using more entries to pad out the table isn't the normal way to handle 4k >> alignment. You can just leave a gap before the start of freebsd-boot. Having >> the sectors "free" vs having them contain zero'd GPT entries doesn't really >> make a difference. One question is when does the boot break? Does it make it >> into the loader and break trying to boot the kernel? Does it make it into >> gptboot and break trying to load the loader? > > Hi John, > > this is gptboot's restriction. Look at sys/boot/common/gpt.c. It is mentioned at the start of gptboot(8) under Implementation Notes, too. Alignment of freebsd-boot is usually not very important. It is only rarely written, and the bootcode is so small that it will probably not take appreciably longer to read or write even when misaligned. Filesystem partitions are where alignment really matters.
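To make the entry-count arithmetic in this PR concrete: GPT entries are 128 bytes each and the table starts at LBA 2, so the default 128 entries occupy 32 512-byte sectors and the first usable sector is 34, while 152 entries need 38 sectors (first usable sector 40) and 156 need 39 (first usable sector 41) - exactly the values the submitter reports. A sketch of the gap-based layout John suggests, which keeps the default table size that gptboot can read; da0 is a hypothetical disk and the sizes are illustrative:

# gpart create -s gpt da0
# gpart add -t freebsd-boot -b 40 -s 472 da0
# gpart add -t freebsd-ufs -a 4k da0
# gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da0

Here the freebsd-boot partition starts at LBA 40 and ends at LBA 511, so the following freebsd-ufs partition begins at LBA 512, which is 4 KiB aligned; the -a 4k flag makes gpart round the start up to a 4 KiB boundary in any case.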
From owner-freebsd-fs@FreeBSD.ORG Fri Feb 7 17:18:33 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 975871C9 for ; Fri, 7 Feb 2014 17:18:33 +0000 (UTC) Received: from bigwig.baldwin.cx (bigwig.baldwin.cx [IPv6:2001:470:1f11:75::1]) (using TLSv1 with cipher ADH-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 6D8021450 for ; Fri, 7 Feb 2014 17:18:33 +0000 (UTC) Received: from jhbbsd.localnet (unknown [209.249.190.124]) by bigwig.baldwin.cx (Postfix) with ESMTPSA id 92B4FB94B; Fri, 7 Feb 2014 12:18:32 -0500 (EST) From: John Baldwin To: Warren Block Subject: Re: amd64/186515: Doesn't boot with GPT when # of entries over than 128. Date: Fri, 7 Feb 2014 12:18:19 -0500 User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; ) References: <201402061930.s16JU2Pi052495@freefall.freebsd.org> <52F4D4C9.3060902@yandex.ru> In-Reply-To: MIME-Version: 1.0 Content-Type: Text/Plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Message-Id: <201402071218.19221.jhb@freebsd.org> X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (bigwig.baldwin.cx); Fri, 07 Feb 2014 12:18:32 -0500 (EST) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 07 Feb 2014 17:18:33 -0000 On Friday, February 07, 2014 11:24:43 am Warren Block wrote: > On Fri, 7 Feb 2014, Andrey V. Elsukov wrote: > > > On 06.02.2014 23:30, John Baldwin wrote: > >> Using more entries to pad out the table isn't the normal way to handle 4k > >> alignment. You can just leave a gap before the start of freebsd-boot. Having > >> the sectors "free" vs having them contain zero'd GPT entries doesn't really > >> make a difference. One question is when does the boot break? Does it make it > >> into the loader and break trying to boot the kernel? Does it make it into > >> gptboot and break trying to load the loader? > > > > Hi John, > > > > this is gptboot's restriction. Look at the sys/boot/common/gpt.c. > > It is mentioned at the start of gptboot(8) under Implementation Notes, > too. > > Alignment of freebsd-boot is usually not very important. It is only > rarely written, and the bootcode is so small that it will probably not > take appreciably longer to read or write even when misaligned. > Filesystem partitions are where alignment really matters. We could at least emit an error message when this happens instead of blowing up. Ah, I think the problem is that gptboot tries to return from main() which it shouldn't do. This was introduced a while ago when the GPT code was rototilled. See if this patch forces an error and then drops to a prompt rather than a silent reboot: Index: sys/boot/i386/gptboot/gptboot.c =================================================================== --- sys/boot/i386/gptboot/gptboot.c (revision 261528) +++ sys/boot/i386/gptboot/gptboot.c (working copy) @@ -156,7 +156,7 @@ /* Process configuration file */ if (gptinit() != 0) - return (-1); + goto prompt; autoboot = 1; *cmd = '\0'; @@ -204,6 +204,7 @@ /* Present the user with the boot2 prompt. 
*/ +prompt: for (;;) { if (!OPT_CHECK(RBX_QUIET)) { printf("\nFreeBSD/x86 boot\n" -- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Fri Feb 7 20:44:54 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AD75E9AE for ; Fri, 7 Feb 2014 20:44:54 +0000 (UTC) Received: from internet06.ebureau.com (internet06.ebureau.com [65.127.24.25]) by mx1.freebsd.org (Postfix) with ESMTP id 870BF1708 for ; Fri, 7 Feb 2014 20:44:54 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by internet06.ebureau.com (Postfix) with ESMTP id 0481E9F1840 for ; Fri, 7 Feb 2014 14:44:48 -0600 (CST) X-Virus-Scanned: amavisd-new at ebureau.com Received: from internet06.ebureau.com ([127.0.0.1]) by localhost (internet06.ebureau.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id c4gRZjltaje5 for ; Fri, 7 Feb 2014 14:44:47 -0600 (CST) Received: from square.office.ebureau.com (square.office.ebureau.com [10.10.20.22]) by internet06.ebureau.com (Postfix) with ESMTPSA id 8FF249F1831 for ; Fri, 7 Feb 2014 14:44:47 -0600 (CST) From: Dustin Wenz Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Subject: Using the *real* sector/block size of a mass storage device for ZFS Message-Id: <1487AF77-7731-4AF8-8E44-FF814BB8A717@ebureau.com> Date: Fri, 7 Feb 2014 14:44:47 -0600 To: "" Mime-Version: 1.0 (Mac OS X Mail 7.0 \(1811\)) X-Mailer: Apple Mail (2.1811) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 07 Feb 2014 20:44:54 -0000 We have been upgrading systems from FreeBSD 9.2 to 10.0-RELEASE, and I'm noticing that all of my zpools now show this status: "One or more devices are configured to use a non-native block size. Expect reduced performance." Specifically, each disk reports: "block size: 512B configured, 4096B native". I've checked these disks with diskinfo and smartctl, and they report a sector size of 512B. I understand that modern disks often use larger sectors due to addressing limits, but I'm unsure how ZFS can disagree with these other tools. In any case, it looks like I will need to rebuild every zpool. There are many thousands of disks involved and the process will take months (if not years). How can I be sure that this is done correctly this time? Will ZFS automatically choose the correct block size, assuming that it's really capable of this? In the meantime, how can I turn off that warning message on all of my disks? "zpool status -x" is almost worthless due to the extreme number of errors reported. - .Dustin
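The usual source of this disagreement is logical versus physical sector size: the 512 B that diskinfo and smartctl report is the logical size, while ZFS on 10.0 appears to key off the physical sector size, which GEOM exposes as the stripe size. A few ways to compare the two (ada0 and tank are example names only):

# diskinfo -v ada0 | egrep 'sectorsize|stripesize'
# camcontrol identify ada0 | grep 'sector size'
# zdb -C tank | grep ashift

A stripesize of 4096 marks a 4K-native ("Advanced Format") drive, camcontrol shows the logical/physical pair for ATA devices, and ashift values in the cached pool configuration identify the affected vdevs: ashift=9 means 512-byte allocation blocks, ashift=12 means 4 KiB.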
From owner-freebsd-fs@FreeBSD.ORG Fri Feb 7 21:23:23 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8B57DA63 for ; Fri, 7 Feb 2014 21:23:23 +0000 (UTC) Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com [66.111.4.28]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 58C7B1A9C for ; Fri, 7 Feb 2014 21:23:23 +0000 (UTC) Received: from compute3.internal (compute3.nyi.mail.srv.osa [10.202.2.43]) by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 2D25A20C62 for ; Fri, 7 Feb 2014 16:23:16 -0500 (EST) Received: from web3 ([10.202.2.213]) by compute3.internal (MEProxy); Fri, 07 Feb 2014 16:23:16 -0500 DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d= messagingengine.com; h=message-id:from:to:mime-version :content-transfer-encoding:content-type:subject:date:in-reply-to :references; s=smtpout; bh=S62BisW471QBwqfh20Il+ufMNJM=; b=J4pez FCkZkhtldVOHoVzSu09fl8iCrDQXzD3bzXMdMFk5i2Eaw8ml5gNUvrjK2TBCw4EW mBBUyoAffrwf+ln+bdW5HsqsNyiCowIPV476gEHGe2tdLr93pgMpjebQH1BEtvFg QrqmcqWlotwpVC0FZ5SjHKbRIfiJlBFTgfm86E= Received: by web3.nyi.mail.srv.osa (Postfix, from userid 99) id 08E00185843; Fri, 7 Feb 2014 16:23:16 -0500 (EST) Message-Id: <1391808195.4799.80708189.5CAD8A4E@webmail.messagingengine.com> X-Sasl-Enc: pO3ZcAHJ1l6rM9fyeYM9DEQxgDI0lMigNiMU2Zdl8KR5 1391808195 From: Mark Felder To: freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Transfer-Encoding: 7bit Content-Type: text/plain X-Mailer: MessagingEngine.com Webmail Interface - ajax-e72899be Subject: Re: Using the *real* sector/block size of a mass storage device for ZFS Date: Fri, 07 Feb 2014 15:23:15 -0600 In-Reply-To: <1487AF77-7731-4AF8-8E44-FF814BB8A717@ebureau.com> References: <1487AF77-7731-4AF8-8E44-FF814BB8A717@ebureau.com> X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 07 Feb 2014 21:23:23 -0000 On Fri, Feb 7, 2014, at 14:44, Dustin Wenz wrote: > We have been upgrading systems from FreeBSD 9.2 to 10.0-RELEASE, and I'm > noticing that all of my zpools now show this status: "One or more devices > are configured to use a non-native block size. Expect reduced > performance." Specifically, each disk reports: "block size: 512B > configured, 4096B native". > > I've checked these disks with diskinfo and smartctl, and they report a > sector size of 512B. I understand that modern disks often use larger > sectors due to addressing limits, but I'm unsure how ZFS can disagree > with these other tools. > > In any case, it looks like I will need to rebuild every zpool. There are > many thousands of disks involved and the process will take months (if not > years). How can I be sure that this is done correctly this time? Will > ZFS automatically choose the correct block size, assuming that it's > really capable of this? > > In the meantime, how can I turn off that warning message on all of my > disks? "zpool status -x" is almost worthless due to the extreme number of > errors reported. > ZFS is doing the right thing by telling you that you should expect degraded performance.
The best way to fix this is to use the gnop method when you build your zpools: gnop create -S 4096 /dev/da0 gnop create -S 4096 /dev/da1 zpool create data mirror /dev/da0.nop /dev/da1.nop Next reboot or import of the zpool will use the regular device names with the correct ashift for 4K drives. The drive manufacturers handled this transition extremely poorly. From owner-freebsd-fs@FreeBSD.ORG Fri Feb 7 22:19:17 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 66442C07; Fri, 7 Feb 2014 22:19:17 +0000 (UTC) Received: from wonkity.com (wonkity.com [67.158.26.137]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id E93701FA2; Fri, 7 Feb 2014 22:19:16 +0000 (UTC) Received: from wonkity.com (localhost [127.0.0.1]) by wonkity.com (8.14.7/8.14.7) with ESMTP id s17MJFMB022458; Fri, 7 Feb 2014 15:19:15 -0700 (MST) (envelope-from wblock@wonkity.com) Received: from localhost (wblock@localhost) by wonkity.com (8.14.7/8.14.7/Submit) with ESMTP id s17MJFXP022455; Fri, 7 Feb 2014 15:19:15 -0700 (MST) (envelope-from wblock@wonkity.com) Date: Fri, 7 Feb 2014 15:19:15 -0700 (MST) From: Warren Block To: Mark Felder Subject: Re: Using the *real* sector/block size of a mass storage device for ZFS In-Reply-To: <1391808195.4799.80708189.5CAD8A4E@webmail.messagingengine.com> Message-ID: References: <1487AF77-7731-4AF8-8E44-FF814BB8A717@ebureau.com> <1391808195.4799.80708189.5CAD8A4E@webmail.messagingengine.com> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (wonkity.com [127.0.0.1]); Fri, 07 Feb 2014 15:19:15 -0700 (MST) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 07 Feb 2014 22:19:17 -0000 On Fri, 7 Feb 2014, Mark Felder wrote: > On Fri, Feb 7, 2014, at 14:44, Dustin Wenz wrote: >> We have been upgrading systems from FreeBSD 9.2 to 10.0-RELEASE, and I'm >> noticing that all of my zpools now show this status: "One or more devices >> are configured to use a non-native block size. Expect reduced >> performance." Specifically, each disk reports: "block size: 512B >> configured, 4096B native". >> >> I've checked these disks with diskinfo and smartctl, and they report a >> sector size of 512B. I understand that modern disks often use larger >> sectors due to addressing limits, but I'm unsure how ZFS can disagree >> with these other tools. >> >> In any case, it looks like I will need to rebuild every zpool. There are >> many thousands of disks involved and the process will take months (if not >> years). How can I be sure that this is done correctly this time? Will >> ZFS automatically choose the correct block size, assuming that it's >> really capable of this? >> >> In the meantime, how can I turn off that warning message on all of my >> disks? "zpool status -x" is almost worthless due to the extreme number of >> errors reported. >> > > ZFS is doing the right thing by telling you that you should expect > degraded performance.
The best way to fix this is to use the gnop method > when you build your zpools: > > gnop create -S 4096 /dev/da0 > gnop create -S 4096 /dev/da1 > zpool create data mirror /dev/da0.nop /dev/da1.nop > > Next reboot or import of the zpool will use the regular device names > with the correct ashift for 4K drives. But remember that this does not fix alignment, and if the partitions are not aligned with 4K blocks, at least write performance will suffer. > The drive manufacturers handled this transition extremely poorly. They may have been forced by desiring compatibility with all the systems that expect 512-byte blocks. :) From owner-freebsd-fs@FreeBSD.ORG Fri Feb 7 22:41:58 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 67D28335; Fri, 7 Feb 2014 22:41:58 +0000 (UTC) Received: from internet06.ebureau.com (internet06.ebureau.com [65.127.24.25]) by mx1.freebsd.org (Postfix) with ESMTP id 3C44211BA; Fri, 7 Feb 2014 22:41:57 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by internet06.ebureau.com (Postfix) with ESMTP id 1E1509F4595; Fri, 7 Feb 2014 16:41:57 -0600 (CST) X-Virus-Scanned: amavisd-new at ebureau.com Received: from internet06.ebureau.com ([127.0.0.1]) by localhost (internet06.ebureau.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id BjDHoiRgwRjv; Fri, 7 Feb 2014 16:41:56 -0600 (CST) Received: from square.office.ebureau.com (square.office.ebureau.com [10.10.20.22]) by internet06.ebureau.com (Postfix) with ESMTPSA id AB77D9F458A; Fri, 7 Feb 2014 16:41:56 -0600 (CST) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Mac OS X Mail 7.0 \(1811\)) Subject: Re: Using the *real* sector/block size of a mass storage device for ZFS From: Dustin Wenz In-Reply-To: <1391808195.4799.80708189.5CAD8A4E@webmail.messagingengine.com> Date: Fri, 7 Feb 2014 16:41:56 -0600 Content-Transfer-Encoding: quoted-printable Message-Id: <8B5D8D0C-ADDE-49B3-87A9-DE1105E32BF9@ebureau.com> References: <1487AF77-7731-4AF8-8E44-FF814BB8A717@ebureau.com> <1391808195.4799.80708189.5CAD8A4E@webmail.messagingengine.com> To: Mark Felder X-Mailer: Apple Mail (2.1811) Cc: "" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 07 Feb 2014 22:41:58 -0000 Thanks for the information! I'm curious as to why gnop is the best way to accomplish this... FreeBSD 10 seems to automatically set ashift: 12 when a new vdev is created. I definitely appreciate the control that gnop provides, however. Am I correct in assuming that it is absolutely impossible to convert an existing ashift:9 vdev to ashift:12? Some of my pools are approaching 1PB in size; transferring the data off and back again would be inconvenient. I suppose I should just be thankful that ZFS is warning me about this now, before I need to build any really large storage pools. - .Dustin On Feb 7, 2014, at 3:23 PM, Mark Felder wrote: > > > On Fri, Feb 7, 2014, at 14:44, Dustin Wenz wrote: >> We have been upgrading systems from FreeBSD 9.2 to 10.0-RELEASE, and I'm >> noticing that all of my zpools now show this status: "One or more devices >> are configured to use a non-native block size. Expect reduced >> performance."
Specifically, each disk reports: "block size: 512B >> configured, 4096B native". >>=20 >> I've checked these disks with diskinfo and smartctl, and they report = a >> sector size of 512B. I understand that modern disks often use larger >> sectors due to addressing limits, but I'm unsure how ZFS can disagree >> with these other tools. >>=20 >> In any case, it looks like I will need to rebuild every zpool. There = are >> many thousands of disks involved and the process will take months (if = not >> years). How can I be I sure that this is done correctly this time? = Will >> ZFS automatically choose the correct block size, assuming that it's >> really capable of this? >>=20 >> In the meantime, how can I turn off that warning message on all of my >> disks? "zpool status -x" is almost worthless due to the extreme = number of >> errors reported. >>=20 >=20 > ZFS is doing the right thing by telling you that you should expect > degraded performance. The best way to fix this is to use the gnop = method > when you build your zpools: >=20 > gnop create -S 4096 /dev/da0 > gnop create -S 4096 /dev/da1 > zpool create data mirror /dev/da0.nop /dev/da1.nop >=20 > Next reboot or import of the zpool will use the regular device names > with the correct ashift for 4K drives. >=20 > The drive manufacturers handled this transition extremely poorly. > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Fri Feb 7 22:50:39 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 38659756 for ; Fri, 7 Feb 2014 22:50:39 +0000 (UTC) Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com [66.111.4.28]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 057541204 for ; Fri, 7 Feb 2014 22:50:38 +0000 (UTC) Received: from compute2.internal (compute2.nyi.mail.srv.osa [10.202.2.42]) by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 9F80D20EA8; Fri, 7 Feb 2014 17:50:37 -0500 (EST) Received: from web3 ([10.202.2.213]) by compute2.internal (MEProxy); Fri, 07 Feb 2014 17:50:37 -0500 DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d= messagingengine.com; h=message-id:from:to:cc:mime-version :content-transfer-encoding:content-type:subject:date:in-reply-to :references; s=smtpout; bh=z4SdapWDYz5Cmge4b5pjfiLGQUU=; b=YiEgj 4YHLDZItjw3YN13/G2PycYqomQh8XH6GUVWSq14dyJCmyLWnt5FOnLr9xFeMIBZi UFq9Fsdk8kIj8yKbFE0btgcQ80Kabr9P4m9W0oKS0s6RFKlxI5NCLCBfOb7NXFwR +6fDVXMnfFaWyGLVn9/gUEBwn8IfymwWePDpII= Received: by web3.nyi.mail.srv.osa (Postfix, from userid 99) id 4F56210E639; Fri, 7 Feb 2014 17:50:37 -0500 (EST) Message-Id: <1391813437.29897.80736933.1F6388D0@webmail.messagingengine.com> X-Sasl-Enc: 1Y2PeOy6LBENceYS7uCCqy3jEjgO0UWAjS6K4ftThVz6 1391813437 From: Mark Felder To: Warren Block MIME-Version: 1.0 Content-Transfer-Encoding: 7bit Content-Type: text/plain X-Mailer: MessagingEngine.com Webmail Interface - ajax-e72899be Subject: Re: Using the *real* sector/block size of a mass storage device for ZFS Date: Fri, 07 Feb 2014 16:50:37 -0600 In-Reply-To: References: <1487AF77-7731-4AF8-8E44-FF814BB8A717@ebureau.com> 
<1391808195.4799.80708189.5CAD8A4E@webmail.messagingengine.com> Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 07 Feb 2014 22:50:39 -0000

On Fri, Feb 7, 2014, at 16:19, Warren Block wrote: > On Fri, 7 Feb 2014, Mark Felder wrote: > > > On Fri, Feb 7, 2014, at 14:44, Dustin Wenz wrote: > >> We have been upgrading systems from FreeBSD 9.2 to 10.0-RELEASE, and I'm > >> noticing that all of my zpools now show this status: "One or more devices > >> are configured to use a non-native block size. Expect reduced > >> performance." Specifically, each disk reports: "block size: 512B > >> configured, 4096B native". > >> > >> I've checked these disks with diskinfo and smartctl, and they report a > >> sector size of 512B. I understand that modern disks often use larger > >> sectors due to addressing limits, but I'm unsure how ZFS can disagree > >> with these other tools. > >> > >> In any case, it looks like I will need to rebuild every zpool. There are > >> many thousands of disks involved and the process will take months (if not > >> years). How can I be sure that this is done correctly this time? Will > >> ZFS automatically choose the correct block size, assuming that it's > >> really capable of this? > >> > >> In the meantime, how can I turn off that warning message on all of my > >> disks? "zpool status -x" is almost worthless due to the extreme number of > >> errors reported. > >> > > > > ZFS is doing the right thing by telling you that you should expect > > degraded performance. The best way to fix this is to use the gnop method > > when you build your zpools:
> > gnop create -S 4096 /dev/da0
> > gnop create -S 4096 /dev/da1
> > zpool create data mirror /dev/da0.nop /dev/da1.nop
> > Next reboot or import of the zpool will use the regular device names > > with the correct ashift for 4K drives. > > But remember that this does not fix alignment, and if the partitions are > not aligned with 4K blocks, at least write performance will suffer.

I've often used raw devices, ever since ZFS gained the ability to tolerate slight differences in disk sizes.
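The thread never quite spells out how to check the result. Both the ashift a pool actually got and the geometry a drive advertises can be inspected directly; a minimal sketch, assuming a pool named "data" built on da0 (pool and device names are illustrative, not from the thread):

zdb -C data | grep ashift    # ashift: 9 = 512-byte allocations, ashift: 12 = 4K
diskinfo -v da0              # sectorsize = logical sector, stripesize = physical sector

An Advanced Format drive that emulates 512-byte sectors will typically report sectorsize 512 with stripesize 4096, which is exactly the disagreement between diskinfo/smartctl and ZFS that started this thread.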
From owner-freebsd-fs@FreeBSD.ORG Sat Feb 8 00:10:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 371E548B for ; Sat, 8 Feb 2014 00:10:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 1DA0B17DB for ; Sat, 8 Feb 2014 00:10:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.7/8.14.7) with ESMTP id s180A0QB002622 for ; Sat, 8 Feb 2014 00:10:00 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s180A02J002621; Sat, 8 Feb 2014 00:10:00 GMT (envelope-from gnats) Date: Sat, 8 Feb 2014 00:10:00 GMT Message-Id: <201402080010.s180A02J002621@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: 조영훈 Subject: Re: amd64/186515: Doesn't boot with GPT when # of entries over than 128. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: 조영훈 List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 08 Feb 2014 00:10:01 -0000

The following reply was made to PR kern/186515; it has been noted by GNATS.

From: 조영훈 To: bug-followup@FreeBSD.org Cc: Subject: Re: amd64/186515: Doesn't boot with GPT when # of entries over than 128. Date: Sat, 8 Feb 2014 09:07:43 +0900

That's right. There is actually no difference between "free" sectors and zeroed "empty" entries. As I said before, 128 entries will be sufficient in most cases, and I know that padding the table is not the preferred way to align partitions. But GRUB2 works even in those cases, and that is not a GNU extension. Since there is no difference except the total entry count, I think it should work.

I think there is some problem loading the freebsd-boot partition. When I select the USB stick in the boot menu, I see nothing but a black screen and the machine reboots immediately. If it boots correctly, it should display the loading screen (the rotating animation) and, if a freebsd-ufs partition exists and /boot is populated, a greeting message such as "BTX Loader 1.00 BTX Version is 1.02", then "Welcome to FreeBSD". I checked that behavior against working cases (128 entries, different start locations for the boot partition), and found that the rotating animation should appear even when the later-stage bootloader files are missing from /boot; in that case the machine automatically reboots after showing the animation, because there is no freebsd-ufs partition.

* This is my first use of FreeBSD's bug reporting system, and it caused me some confusion. I sent my earlier reply to John directly instead of to bug-followup, which I guess is why it never appeared in the bug tracking system. I hope sending the reply to this address is right...
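For anyone trying to reproduce the PR, the oversized table itself is easy to construct, since gpart takes the entry count at creation time. A sketch only; the disk name and partition size are illustrative and not taken from the PR:

gpart create -s gpt -n 152 da0    # table sized for 152 entries instead of the default 128
gpart add -a 4k -t freebsd-boot -s 512k da0
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da0

With 128-byte entries, 128 of them fit in the usual 32 sectors; asking for more grows the table and pushes the first usable LBA upward, which is the layout this PR reports failing to boot.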
From owner-freebsd-fs@FreeBSD.ORG Sat Feb 8 01:47:09 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DBBBF66E; Sat, 8 Feb 2014 01:47:08 +0000 (UTC) Received: from sasl.smtp.pobox.com (a-pb-sasl-quonix.pobox.com [208.72.237.25]) by mx1.freebsd.org (Postfix) with ESMTP id 973E81E40; Sat, 8 Feb 2014 01:47:08 +0000 (UTC) Received: from sasl.smtp.pobox.com (unknown [127.0.0.1]) by a-pb-sasl-quonix.pobox.com (Postfix) with ESMTP id C671F10BE5; Fri, 7 Feb 2014 20:43:00 -0500 (EST) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=pobox.com; h=date :message-id:from:to:cc:subject:in-reply-to:references :mime-version:content-type:content-transfer-encoding; s=sasl; bh=PJhpqqjF8va08pBXc0MDuxHpSzI=; b=pHxoM6P57UZ7OHerb1JOOe8tcWxV r4R0T3oKv6CLKmuBxBc/ExyyBf0yZxotMMitnpv87tVl6s+rR6SAWFvAgfpfHAXA nAHcwCN73z9xtJ9Rv4yAK1FRwN2OV13mtmRl/kh1+gKq94Wy8nYu5GFYEsw9vpsa VcDTTVEJkUGld3w= DomainKey-Signature: a=rsa-sha1; c=nofws; d=pobox.com; h=date:message-id :from:to:cc:subject:in-reply-to:references:mime-version :content-type:content-transfer-encoding; q=dns; s=sasl; b=lJcrPi Sk2ZCnbh27fD7IgauRvFBufrwqAwaAHNQgI+ZVwBXdEQxz5OJyWIQ4XyJP170txo nNlJMziVZ7EGZZifu1CwPyhjc2dX0ZM462q3PPrbGWjN5ShL2EKa1o1YEAzgr1iv bZPhg/ceOaQyTDNNfrS34LbL2XKURj0EtD1TI= Received: from a-pb-sasl-quonix.pobox.com (unknown [127.0.0.1]) by a-pb-sasl-quonix.pobox.com (Postfix) with ESMTP id 9FE0F10BE4; Fri, 7 Feb 2014 20:43:00 -0500 (EST) Received: from bmach.nederware.nl (unknown [27.252.207.92]) by a-pb-sasl-quonix.pobox.com (Postfix) with ESMTPA id AC1B210BE3; Fri, 7 Feb 2014 20:42:59 -0500 (EST) Received: from quadrio.nederware.nl (quadrio.nederware.nl [192.168.33.13]) by bmach.nederware.nl (Postfix) with ESMTP id C523E362E0; Sat, 8 Feb 2014 14:42:57 +1300 (NZDT) Received: from quadrio.nederware.nl (quadrio.nederware.nl [127.0.0.1]) by quadrio.nederware.nl (Postfix) with ESMTP id 7927F4045348; Sat, 8 Feb 2014 14:42:57 +1300 (NZDT) Date: Sat, 08 Feb 2014 14:42:57 +1300 Message-ID: <878utmxtum.wl%berend@pobox.com> From: Berend de Boer To: Dustin Wenz Subject: Re: Using the *real* sector/block size of a mass storage device for ZFS In-Reply-To: <8B5D8D0C-ADDE-49B3-87A9-DE1105E32BF9@ebureau.com> References: <1487AF77-7731-4AF8-8E44-FF814BB8A717@ebureau.com> <1391808195.4799.80708189.5CAD8A4E@webmail.messagingengine.com> <8B5D8D0C-ADDE-49B3-87A9-DE1105E32BF9@ebureau.com> User-Agent: Wanderlust/2.15.9 (Almost Unreal) SEMI-EPG/1.14.7 (Harue) FLIM/1.14.9 (Gojō) APEL/10.8 EasyPG/1.0.0 Emacs/24.3 (i686-pc-linux-gnu) MULE/6.0 (HANACHIRUSATO) Organization: Xplain Technology Ltd MIME-Version: 1.0 (generated by SEMI-EPG 1.14.7 - "Harue") Content-Type: multipart/signed; boundary="pgp-sign-Multipart_Sat_Feb__8_14:42:57_2014-1"; micalg=pgp-sha256; protocol="application/pgp-signature" Content-Transfer-Encoding: 7bit X-Pobox-Relay-ID: 574D7504-9062-11E3-9C22-873F0E5B5709-48001098!a-pb-sasl-quonix.pobox.com Cc: "" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 08 Feb 2014 01:47:09 -0000

>>>>> "Dustin" == Dustin Wenz writes:
Dustin> Am I correct in assuming that it is absolutely impossible
Dustin> to convert an existing ashift:9 vdev to ashift:12? Some of
Dustin> my pools are approaching 1PB in size; transferring the
Dustin> data off and back again would be inconvenient.

I thought you could do it one disk at a time (if you have a redundant pool).

But maybe not.

--
All the best,

Berend de Boer

From owner-freebsd-fs@FreeBSD.ORG Sat Feb 8 07:16:45 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 62CF8F4; Sat, 8 Feb 2014 07:16:45 +0000 (UTC) Received: from mail.jrv.org (rrcs-24-73-246-106.sw.biz.rr.com [24.73.246.106]) by mx1.freebsd.org (Postfix) with ESMTP id 360F61174; Sat, 8 Feb 2014 07:16:44 +0000 (UTC) Received: from localhost (localhost.localdomain [127.0.0.1]) by mail.jrv.org (Postfix) with ESMTP id 27B8C24A001; Sat, 8 Feb 2014 01:08:32 -0600 (CST) X-Virus-Scanned: amavisd-new at zimbra.housenet.jrv Received: from mail.jrv.org ([127.0.0.1]) by localhost (zimbra.housenet.jrv [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id RsrpXWJZofZE; Sat, 8 Feb 2014 01:08:22 -0600 (CST) Received: from [192.168.23.128] (BMX.housenet.jrv [192.168.3.140]) by mail.jrv.org (Postfix) with ESMTPSA id 13C9D1EA1B0; Sat, 8 Feb 2014 01:08:22 -0600 (CST) Message-ID: <52F5D7E7.70006@jrv.org> Date: Sat, 08 Feb 2014 01:08:23 -0600 From: "James R. Van Artsdalen" User-Agent: Mozilla/5.0 (Windows NT 5.0; rv:12.0) Gecko/20120428 Thunderbird/12.0.1 MIME-Version: 1.0 To: "Andrey V. Elsukov" Subject: Re: amd64/186515: Doesn't boot with GPT when # of entries over than 128. References: <201402061930.s16JU2Pi052495@freefall.freebsd.org> <52F4D4C9.3060902@yandex.ru> In-Reply-To: <52F4D4C9.3060902@yandex.ru> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 08 Feb 2014 07:16:45 -0000

On 2/7/2014 6:42 AM, Andrey V. Elsukov wrote: > On 06.02.2014 23:30, John Baldwin wrote: >> Using more entries to pad out the table isn't the normal way to handle 4k >> alignment. You can just leave a gap before the start of freebsd-boot.
Having >> the sectors "free" vs having them contain zero'd GPT entries doesn't really >> make a difference. One question is when does the boot break? Does it make it >> into the loader and break trying to boot the kernel? Does it make it into >> gptboot and break trying to load the loader? > Hi John, > > this is gptboot's restriction. Look at the sys/boot/common/gpt.c.

For the last couple of years every FreeBSD system I've installed has been partitioned by this command:

bigtex:/root# grep -- -s.gpt mkdisk.sh
gpart create -s gpt -n 152 "$DISK" || exit

yielding something like

bigback:/root# gpart show
=>        40  3907029089  ada0  GPT  (1.8T)
          40         128     1  freebsd-boot  (64K)
         168    29828864     2  freebsd-swap  (14G)
    29829032  3877200097     3  freebsd-zfs  (1.8T)

I guess the bug is unique to GPT UFS booting?
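John's no-padding approach from the quoted exchange needs nothing beyond a stock 128-entry table: gpart's -a flag rounds each partition's start to the requested boundary, leaving any gap implicitly. A sketch; the disk name and sizes are illustrative, loosely echoing James's layout above:

gpart create -s gpt ada0
gpart add -a 4k -t freebsd-boot -s 64k ada0    # start rounded up to a 4K boundary
gpart add -a 4k -t freebsd-swap -s 14g ada0
gpart add -a 4k -t freebsd-zfs ada0            # remainder of the disk, still 4K-aligned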
From owner-freebsd-fs@FreeBSD.ORG Sat Feb 8 10:03:01 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7EC92158 for ; Sat, 8 Feb 2014 10:03:01 +0000 (UTC) Received: from mail-vc0-x22b.google.com (mail-vc0-x22b.google.com [IPv6:2607:f8b0:400c:c03::22b]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 37FD41C99 for ; Sat, 8 Feb 2014 10:03:01 +0000 (UTC) Received: by mail-vc0-f171.google.com with SMTP id le5so3424267vcb.16 for ; Sat, 08 Feb 2014 02:02:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=0jUgPO7DEsdEOmsDeD7DbxZkKMcISPnxXTk8MeJVK8Q=; b=dcvX6e8dICHG2ahho+yASLqMO49DUMZTggM5NOBClDUUWd10B2a79XSinCCe4F/06Z IPstJSqub316ORKfCUSosGpYhNVnl5qmORKd171HDX4hOvjZLygOngnftRZJD3Wbngbi F/mavYY1aQqda0fldSe+grYZcmZwpb9LZu+HufGx/f0Tw0cDACYv5oSd1nHLMteMWgJR EykFpXZCwu+xzqR12iWoQSKBHhzJfHHaSD2+6OPQuRZSkNPnBavVMHmjx659AEh6oOeZ NqsiTY78J4dC6RVQ32c+5VOrYO0FeyZCPNYfrjmi/xRKC5ynS/uoNThR5oBREpyjmDAf Dmtw== MIME-Version: 1.0 X-Received: by 10.220.191.134 with SMTP id dm6mr14609710vcb.16.1391853779287; Sat, 08 Feb 2014 02:02:59 -0800 (PST) Received: by 10.58.128.132 with HTTP; Sat, 8 Feb 2014 02:02:59 -0800 (PST) In-Reply-To: <878utmxtum.wl%berend@pobox.com> References: <1487AF77-7731-4AF8-8E44-FF814BB8A717@ebureau.com> <1391808195.4799.80708189.5CAD8A4E@webmail.messagingengine.com> <8B5D8D0C-ADDE-49B3-87A9-DE1105E32BF9@ebureau.com> <878utmxtum.wl%berend@pobox.com> Date: Sat, 8 Feb 2014 11:02:59 +0100 Message-ID: Subject: Re: Using the *real* sector/block size of a mass storage device for ZFS From: Johan Hendriks To: Berend de Boer Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: "" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 08 Feb 2014 10:03:01 -0000

On Saturday 8 February 2014, Berend de Boer wrote: > >>>>> "Dustin" == Dustin Wenz > > writes: > > Dustin> Am I correct in assuming that it is absolutely impossible > Dustin> to convert an existing ashift:9 vdev to ashift:12? Some of > Dustin> my pools are approaching 1PB in size; transferring the > Dustin> data off and back again would be inconvenient. > > I thought you could do it one disk at a time (if you have a redundant > pool). > > But maybe not. > > -- > All the best, > > Berend de Boer

No, that is not possible. The ashift is set when the pool is created, hence the fact that you only need the gnop method at pool creation time. If you add a vdev to the pool you do not need the gnop method anymore, because you cannot change it. You can align the disk so it has a 4K alignment, but that is not the ashift of the pool.

I think FreeBSD 10 sees whether the disk is capable of an ashift of 12 and therefore gives you a warning. I have a FreeBSD 10 machine with two types of disks, WD RE and WD SE drives. If I create a pool with the RE drives, FreeBSD 10 will create it with an ashift of 9, so I need to use the gnop method to get an ashift of 12. If I create a pool with the SE drives, it automatically uses an ashift of 12. I guess if I had created the pool with the SE drives on FreeBSD 9 with an ashift of 9, FreeBSD 10 would warn me with the warning you see.

I use gpart to create a 4K disk alignment:

# gpart create -s gpt da0
# gpart add -a 4k -t freebsd-zfs -l labelname da0

This way the disk has a 4K alignment. To get an ashift of 12 on your pool you can use the gnop method, or you could try to create the pool without gnop and see whether FreeBSD 10 detects the disk as an Advanced Format disk, as it does for the WD SE drives; that way you do not need the gnop method. If FreeBSD 10 gives you this warning, I am almost certain you do not need the gnop method on those disks.

Regards Johan Hendriks
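Tying Johan's points together: the .nop shims only exist to influence the creation-time ashift decision, so they can be discarded as soon as the pool exists. A sketch of the full cycle under that assumption (pool and disk names are illustrative):

gnop create -S 4096 /dev/da0
gnop create -S 4096 /dev/da1
zpool create data mirror /dev/da0.nop /dev/da1.nop    # vdev label records ashift=12
zpool export data
gnop destroy /dev/da0.nop /dev/da1.nop                # shims would vanish on reboot anyway
zpool import data                                     # pool returns on plain da0/da1, ashift kept

This matches Mark's earlier note that the next reboot or import simply uses the regular device names while keeping the correct ashift.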
From owner-freebsd-fs@FreeBSD.ORG Sat Feb 8 16:28:32 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id ACA4070E for ; Sat, 8 Feb 2014 16:28:32 +0000 (UTC) Received: from mail-oa0-x22b.google.com (mail-oa0-x22b.google.com [IPv6:2607:f8b0:4003:c02::22b]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 6C56F198F for ; Sat, 8 Feb 2014 16:28:32 +0000 (UTC) Received: by mail-oa0-f43.google.com with SMTP id h16so5714737oag.2 for ; Sat, 08 Feb 2014 08:28:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=phXN8e1ukbOB+HsCr1WpapIdOGmyp0cv08PAWyQy81o=; b=FOKaA7lpYxjUNMMzylSul9jx5arFr6jHxU4cYiuwsPRGfAhmf0inut2c7Ano7NxE7E 91ig294D/EOBzBAsi1C2c/9bk5c96Z5RGOyMpqV8SzduCVIWFLP6TdfCw/+VOl/CBlXW fzB3JQxjhjMZjL/QqpsbcDWwleaGfC9qNEYh6+H7d8vcS2UTkKiccVMnJ2OvSfWFCLtB Z79rS/TuWIrmtxf+M5bUBSSalaI5FDtX1AcSzmulBm0uNUM0k2H7pwzdHmvaAI+nPeAr Uuv59hFdJ7/nbH9pGTF5zLQWGsUgQoJNHQnvirveFjIPe32GuETxXpR7+rnezoNKH/L4 qy+Q== MIME-Version: 1.0 X-Received: by 10.60.67.105 with SMTP id m9mr971603oet.58.1391876911536; Sat, 08 Feb 2014 08:28:31 -0800 (PST) Received: by 10.76.180.164 with HTTP; Sat, 8 Feb 2014 08:28:31 -0800 (PST) In-Reply-To: References: <1487AF77-7731-4AF8-8E44-FF814BB8A717@ebureau.com> <1391808195.4799.80708189.5CAD8A4E@webmail.messagingengine.com> <8B5D8D0C-ADDE-49B3-87A9-DE1105E32BF9@ebureau.com> <878utmxtum.wl%berend@pobox.com> Date: Sat, 8 Feb 2014 08:28:31 -0800 Message-ID: Subject: Re: Using the *real* sector/block size of a mass storage device for ZFS From: Freddie Cash To: Johan Hendriks Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 08 Feb 2014 16:28:32 -0000

On Feb 8, 2014 2:03 AM, "Johan Hendriks" wrote: > > On Saturday 8 February 2014, Berend de Boer wrote: > > > >>>>> "Dustin" == Dustin Wenz > > > writes: > > > > Dustin> Am I correct in assuming that it is absolutely impossible > > Dustin> to convert an existing ashift:9 vdev to ashift:12? Some of > > Dustin> my pools are approaching 1PB in size; transferring the > > Dustin> data off and back again would be inconvenient. > > > > I thought you could do it one disk at a time (if you have a redundant > > pool). > > > > But maybe not. > > > > -- > > All the best, > > > > Berend de Boer > > > No, that is not possible. > The ashift is set when the pool is created, hence the fact that you only > need the gnop method at pool creation time. If you add a vdev to the pool > you do not need the gnop method anymore, because you cannot change it. > You can align the disk so it has a 4K alignment, but that is not the > ashift of the pool.

Correction: the ashift is set at the vdev level, when the vdev is created. You only need to use the gnop method on a single disk in a vdev, as ZFS uses the largest ashift of all drives in the vdev. And you need to do it any time you add a new vdev to a pool.

I believe there's a sysctl in FreeBSD 10 where you can set the minimum ashift level so you don't need to use the gnop method.

From owner-freebsd-fs@FreeBSD.ORG Sat Feb 8 18:19:35 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7416272D for ; Sat, 8 Feb 2014 18:19:35 +0000 (UTC) Received: from woozle.rinet.ru (woozle.rinet.ru [195.54.192.68]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id E23EB12C4 for ; Sat, 8 Feb 2014 18:19:34 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by woozle.rinet.ru (8.14.5/8.14.5) with ESMTP id s18IJPjs053390; Sat, 8 Feb 2014 22:19:25 +0400 (MSK) (envelope-from marck@rinet.ru) Date: Sat, 8 Feb 2014 22:19:25 +0400 (MSK) From: Dmitry Morozovsky To: Freddie Cash Subject: Re: Using the *real* sector/block size of a mass storage device for ZFS In-Reply-To: Message-ID: References: <1487AF77-7731-4AF8-8E44-FF814BB8A717@ebureau.com> <1391808195.4799.80708189.5CAD8A4E@webmail.messagingengine.com> <8B5D8D0C-ADDE-49B3-87A9-DE1105E32BF9@ebureau.com> <878utmxtum.wl%berend@pobox.com> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) X-NCC-RegID: ru.rinet X-OpenPGP-Key-ID: 6B691B03 MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (woozle.rinet.ru [0.0.0.0]); Sat, 08 Feb 2014 22:19:26 +0400 (MSK) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 08 Feb 2014 18:19:35 -0000

On Sat, 8 Feb 2014, Freddie Cash wrote:

[snip]

> I believe there's a sysctl in FreeBSD 10 where you can set the minimum > ashift level so you don't need to use the gnop method.
Does not seem so, only the max one:

marck@hamster:/FreeBSD> uname -a
FreeBSD hamster.wpub.woozle.net 10.0-STABLE FreeBSD 10.0-STABLE #1 r261284: Thu Jan 30 14:06:02 MSK 2014 marck@hamster.wpub.woozle.net:/usr/obj/usr/src/sys/HAMSTER amd64
marck@hamster:/FreeBSD> sysctl -a | grep ashift
vfs.zfs.max_auto_ashift: 13

--
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: marck@FreeBSD.org ]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------
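As Dmitry's output shows, only the upper bound was exposed on that revision. Later FreeBSD builds added a matching vfs.zfs.min_auto_ashift sysctl, which removes the need for the gnop workaround where present; a sketch, hedged on the knob actually existing on your revision, with pool and device names illustrative:

sysctl vfs.zfs.min_auto_ashift=12    # force at least 4K allocations for newly created vdevs
zpool create data mirror /dev/da0 /dev/da1
echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf    # persist across reboots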