Date: Thu, 02 Aug 2012 20:41:26 +0200
From: Attila Nagy <bra@fsn.hu>
To: "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject: ZFS thinks that a newly inserted empty disk is part of the pool
Message-ID: <501AC9D6.6080409@fsn.hu>
Hi,

What I do (and always did on FreeBSD 8):
- wait for a disk to malfunction (its SCSI device disappears), or pull it from the enclosure when I know it's bad (SMART info, checksum errors, etc.)
- insert a new disk, straight from the shop (it still has a lot of null bytes on it)
- run zpool replace pool daX when the device comes up again

This previously caused ZFS to resilver onto the replacement disk, and everything was OK.

We switched these machines to 9 sometime recently (r237433), and the behavior above has changed. The disk disappears, gets physically replaced, and reappears, but zpool replace now says that the disk is already part of the pool. I can even see a ZFS signature on it with dd. After rebooting the machine, I can issue the zpool replace command without any problems, and ZFS starts rebuilding its contents. (I have no dd data from this state, sorry.)

Additional information which may be relevant: the drives are hooked up to Smart Array (ciss) controllers as RAID 0 volumes (one logical drive per physical drive). I considered a ciss firmware bug (caching the ZFS metadata even after the disk has been replaced), but that seems far-fetched, and it would affect both FreeBSD 8 and 9. So my guess is that it's a FreeBSD bug which I couldn't see on 8.

Any ideas about what could cause this?
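For anyone wanting to verify whether a freshly inserted disk really reads as blank where ZFS would look, here is a minimal sketch. ZFS stores two 256 KiB vdev labels at the front of a device (and two more at the end), so scanning the first 512 KiB for non-zero bytes is a quick smell test for a stale label. The device path is hypothetical; a scratch file stands in for the disk so the sketch is runnable as-is.

```shell
# Hypothetical device; a scratch file stands in for e.g. /dev/da6.
DEV=/tmp/fake_da6

# Simulate a factory-blank disk: 4 MiB of null bytes.
dd if=/dev/zero of="$DEV" bs=1048576 count=4 2>/dev/null

# The first two ZFS vdev labels occupy the leading 512 KiB of a vdev.
# Count any non-zero bytes in that region; 0 means no leftover label data.
NONZERO=$(head -c 524288 "$DEV" | tr -d '\0' | wc -c)

if [ "$NONZERO" -eq 0 ]; then
    echo "front label area blank"
else
    echo "possible stale ZFS label"
fi
```

On a real replacement disk you would point DEV at the raw device node; if the region is not blank even though the disk is brand new, that points at the controller presenting cached or stale data rather than at ZFS itself.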