Date:      Mon, 28 Nov 2011 17:49:50 -0800
From:      allan@stokes.ca
To:        freebsd-current@freebsd.org
Subject:   upgrade issue 8.x to 9.0-RC2: libz.so.5 not found
Message-ID:  <0168ab7579589d8d866ce8ff93544f1f.squirrel@sm.webmail.pair.com>

Hello everyone,

First a quick introduction, then my project, then my problem.

==My FreeBSD involvement==

I've been dabbling with FreeBSD since I set up stokes.ca at pair.com over
a decade ago.  I liked the service at Pair, so I installed FreeBSD at home
on a spare box.  One of those evil Fujitsu disk drives took out my stable
4.x test system before I had a complete set of backups (Fujitsu was an early
adopter of fluid dynamic bearings and I bought on acoustics alone).  I won't pain
anyone by recalling the 5.x experience that followed.  I had a 6.x system
for a long time, but it never got much beyond a quasi offline backup
spool.  The 30GB system disk finally conked out--this was expected, so I
slapped a new system disk into a box that had previously been my
workstation, and performed a fresh 8 series installation.

Unfortunately, I was never able to mount my PATA backup drive because of
issues with the JMicron 363 PATA controller chip on the Asus P5B.  These
issues still exist in 9.x FWIW.  Last week I finally shut down my
firewall, stretched the drive cables across (to a different Core Duo
mainboard not afflicted with a JMicron PATA controller) and scarfed the
aging data.  I had the data elsewhere in bits and pieces, so there was
never great urgency; the value was mainly that this was my only
_organized_ backup set.

==Seven-man EGTB==

I'm getting more serious about FreeBSD again because, over the next few
years, I'm becoming involved in a hobby project to compute the seven-piece
chess EGTB data set, which I think will total 60TB when the computation
someday completes.  I'm mostly interested in compressed on-disk
representations with access speeds high enough for chess engines to use
in real time.  I might do a few pieces of the retrograde computation
myself, but nowhere close to the whole thing.  To perform the computation
with full chess accuracy you need to begin with maximal promotion--two
kings and five queens--then work down through crazy promotions, such as
two kings and five bishops, and finally to positions that could plausibly
occur in a real chess game: many terabytes of computationally intense
bootstrap to cover some very tiny corner cases (how tiny is a question
I'm interested to explore, but most chess purists aren't).

Even a small piece of the project would generate copious data, so I've
rigged up an experimental ZFS v28 box with a triple mirror across three
500GB disks I had lying around (two slightly enterprisy, one a Seagate
refurb).
I think of it as a two-way mirror with a pre-silvered hot spare that's
better than nothing.  I put 6GB of memory into the box and tried out
deduplication.  It ran great.  I don't expect to use this feature (in
hobby production) any time soon.  I'll probably upgrade all the drives
after the waters recede in Thailand and run fairly basic parameters.
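
For concreteness, the pool was created along these lines (the device
names below are placeholders, not my actual ones):

  # sketch of the triple mirror; ada1/ada2/ada3 stand in for my disks
  zpool create tank mirror ada1 ada2 ada3
  zfs set dedup=on tank   # the dedup experiment; since switched back off
  zpool status tank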

This week I'm intending to purchase a pair of Intel 311 SSD drives (20GB
SLC) as a mirrored ZIL, or a split ZIL/L2ARC, option (Newegg.ca has them
for $110).
Warn me soon if I'm doing something stupid!  I wish my ZFS box had
chipkill ECC, but that upgrade is out of my budget at present, since it
involves a whole new system board, CPU, and memory.  I'm going to have to
live a bit on the edge with other backups (and integrity checksums)
available.  I have some general interest in testing out what ZFS can do on
a fully configured box.  I also do a lot of R programming and I'm
experimenting with HDF5 for large data sets.  The ZIL upgrade doesn't
pertain much to my EGTB project.
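
What I have in mind for the SSDs is roughly one of the two layouts below
(device and partition names are again placeholders):

  # Option 1: both SSDs as a mirrored separate log (ZIL) device
  zpool add tank log mirror ada4 ada5

  # Option 2: partition each SSD and split duties--a small mirrored ZIL
  # plus the remainder as L2ARC (cache devices can't be mirrored)
  zpool add tank log mirror ada4p1 ada5p1
  zpool add tank cache ada4p2 ada5p2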

I'm either not brave enough or insane enough to put my FreeBSD system
volume onto the ZFS mirror, as much as that seems kind of cool.  Plain old
UFS on a separate drive for me.  Have others had success with ZFS system
volumes?

==Broken upgrade problem==

I was able to do a binary upgrade from 8.x to 9.0-RC2-p0.  Slick.  I once
skimmed Colin's thesis, but the math is heavy--I understood enough to
grasp that it's an excellent piece of work, and certainly much
appreciated.

After the binary upgrade my system runs well enough to initiate a ZFS pool
and survive some ZFS pounding over the network (20GB data set with a
recursive symlink deduplicated many times over until I finally killed it).

However, programs such as startx and portupgrade are failing with the
message "libz.so.5 not found".  I know I can fix this with an evil
symlink, but that doesn't seem right, and what else is broken?  Is there
not a facility in portupgrade to scan my live dependencies and warn me of
breakage?  I have not encountered such a beast in my gleanings to date.
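
Failing that, something along the lines of this crude scan is what I have
in mind (untested, and the paths are just the obvious guesses):

  # walk the installed binaries and libraries and flag anything whose
  # shared-library dependencies the runtime linker can no longer resolve
  find /usr/local/bin /usr/local/sbin /usr/local/lib -type f | \
      while read f; do
          ldd "$f" 2>/dev/null | grep -q "not found" && echo "$f"
      done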

On the first pass I skipped the package build step in my haste to break
everything.  That didn't work well (of course), so I rolled it back
(sweet) and followed the instructions, including the Ruby preamble and the
triple Beetlejuice freebsd-update incantation.
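
For reference, the sequence I followed was roughly this (reconstructed
from memory, so the exact release string may be off):

  freebsd-update upgrade -r 9.0-RC2
  freebsd-update install     # first pass: new kernel
  shutdown -r now
  freebsd-update install     # second pass: new userland
  portupgrade -af            # rebuild all ports against the new world
  freebsd-update install     # third pass: remove stale shared libraries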

If there's an easy way to fix the mess, I'll do so, but otherwise there's
no reason not to repair the problem with the blunt tool of a fresh system
installation.  In the past, I've been able to install FreeBSD using PXE
and a TFTP service on my firewall (mostly using em0 network cards).  I use
the PXE facility so rarely that it's a small struggle to recall the magic
each time.
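
The magic, as best I remember it, amounts to the two snippets below (all
addresses and paths are made up for illustration):

  # /usr/local/etc/dhcpd.conf on the firewall: hand out pxeboot and an
  # NFS root holding the install media
  subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.200 192.168.1.210;
      next-server 192.168.1.1;
      filename "pxeboot";
      option root-path "192.168.1.1:/usr/freebsd-dist";
  }

  # /etc/inetd.conf: enable tftpd rooted at the directory holding pxeboot
  tftp dgram udp wait root /usr/libexec/tftpd tftpd -l -s /tftpboot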

Note that after my ports recompiled (portupgrade -af), there were roughly 40
packages out of 470 in /var/db/pkg that still had the old date stamp. 
These could be unimportant remnants for all I know.  The list includes
xorg-7.5, apache-2.0.63_15, a bunch of qt4 and xfree stuff, and xz-4.999.9
which I don't think pertains to libz, but I could be wrong.
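
A hypothetical way to list the stragglers: registration directories whose
timestamps predate the rebuild (the two-day cutoff here is just a guess):

  find /var/db/pkg -maxdepth 1 -type d -mtime +2 | sort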

Finally, I also have some comments about ZFS as it pertains to idiots
running on commodity DRAM (I can't be the only one).  Which is the right
mailing list?

Basically my sentiment is that if you don't have hardware ECC (not even
available, so far as I know, for the SB-E platform that is ideal for the
EGTB computation, and the Xeon equivalents are pricey), some software
memory scrubbing could be valuable.  It would be an obvious thing to
implement for the in-memory ARC cache, where cached data can sit exposed
to cosmic rays for an indefinite period on a lightly loaded
network--assuming ZFS doesn't incorporate that trick already.

Any suggestions are appreciated.

Allan



