Date: Thu, 04 Mar 2010 22:48:16 +0000
From: spellberg_robert <emailrob@emailrob.com>
To: fbsd_questions
Subject: [ fbsd_questions ] tar(1) vs. msdos_fs: a death_spiral ?

greetings, all ---

i confess that this one has me flummoxed.

the short question:
  does tar(1) spit_up when extracting onto an msdos_fs hard_drive ?

[ i tried the mailing_list archives, "tar AND msdos", for
  -questions, -chat, -bugs, -newbies, -performance ]
[ other research as indicated ]

i have no problem using tar(1) on ufs.
large files, small files; if i am on ufs, everything is fine.
i have been creating tarballs from medium_size msdos_fs drives, also.
this worked fine.
i would check them by extracting into a ufs root_point.
no problem.

this week, i tried to do something new.
i wanted to take a tarball, already on ufs, that was created from an
msdos_fs drive, and extract it onto an msdos_fs drive.
this, to me, actually seems like a reasonable idea; but, what do i know ?

well, it starts out just fine, but, it rapidly degenerates into what is,
normally, infinite_loop land.
when ps(1) says a cpu_% of 1%, 2%, 5%, ok, it is an active process.
in about ten minutes, tar(1) climbs to 90% cpu.
after 20 minutes, 99%.
it does not matter whether X_windows is running.
foreground or background process, no difference.
it seems to be working correctly, because the error_file is always of
zero_size.
i suspect that, if i left it alone, after a few days, it would finish.
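[ the cpu_% numbers above are straight from ps(1).
  a csh loop along these lines, run from a second xterm, is one way to
  watch the climb; the pid, "1234", and the five_minute sleep are, of
  course, only placeholders. ]

  # sample the tar process once per interval
  while ( 1 )
    date -u
    ps -o pid,%cpu,%mem,time,command -p 1234
    sleep 300
  end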
some details [ everything is ufs, using 8kB/1kB, except "/mnt", which is
clustered as indicated; of course, the tarball is not named "ball", nor
is the path, to the tarball, named "path", but, then, you knew that ].

  mkdir /path_c
  mkdir /path_c/88_x
  mkdir /path_d
  mkdir /path_d/88_x

  mount -v -t msdos /dev/ad1s1 /mnt
    [ fat_32, about 6_GB, 4_KB cluster, the "c:\" drive, primary partition. ]
  cd /mnt
  ( tar cvplf /path_c/99_ball.tar . > /path_c/90_cvpl.out ) >& /path_c/91_cvpl.err &
    [ real time 16m 07s, exit_status 0 ]
  cd / ; umount /mnt

  mount -v -t msdos /dev/ad1s5 /mnt
    [ fat_32, about 12_GB, 8_KB cluster, the "d:\" drive, extended partition. ]
  cd /mnt
  ( tar cvplf /path_d/99_ball.tar . > /path_d/90_cvpl.out ) >& /path_d/91_cvpl.err &
    [ real time 20m 15s, exit_status 0 ]
  cd / ; umount /mnt

  cd /path_c/88_x
  ( tar xvplf ../99_ball.tar > ../92_xvpl.out ) >& ../93_xvpl.err &
    [ real time 08m 11s; exit_status 0 ]
  diff ../9[02]*
    [ exit_status 0; the tables_of_contents are the same ]
  ls -l ..
    [ visually inspect the error_files to be of zero_size - verified ]

  cd /path_d/88_x
  ( tar xvplf ../99_ball.tar > ../92_xvpl.out ) >& ../93_xvpl.err &
    [ real time 12m 37s; exit_status 0 ]
  diff ../9[02]*
    [ exit_status 0; the tables_of_contents are the same ]
  ls -l ..
    [ visually inspect the error_files to be of zero_size - verified ]

[ note that this approach works;
  it is a good excuse to refill my coffee_cup. ]

[ physically replace the source hard_drive w/ an 80_GB capacity, 32_KB
  cluster, primary_partition only, virgin hard_drive.
  this destination hard_drive was "fdisk"ed and "format"ed
  yesterday_morning; this drive was "scandisk"ed yesterday for 12 hours,
  using the "thorough" option, and it has zero bad clusters
  [ i wanted to eliminate the drive as the problem ]. ]

  mount -v -t msdos /dev/ad1s1 /mnt
  mkdir /mnt/path_cc
  cd /mnt/path_cc
  ( tar xvplf /path_c/99_ball.tar > ../92_xvpl.out ) >& ../93_xvpl.err &
    [ started this at 18:05_utc; it is now about 21:35_utc.
      the toc_file, from the 8_minute extraction above, has 87517 lines
      in it; the current toc_file has only 12667 lines. ]
    [ this is the second hard_drive i have tried this on, this week;
      i will probably kill the process, as the xterm is being updated
      only about every 8 seconds, now. ]

on the first hard_drive [ i have not done this on the second drive,
yet ], i noted that i had a successful extraction on the ufs drive.
not being the smartest person around, i had, what i thought to be, a
--brilliant-- idea:
  "what if i try a recursive copy of the successful extraction ?"
this is interesting; the recursive copy started_out like gang_busters,
then, just like the extraction, slowly bogged_down to 99%_cpu.

hmmm..., two different msdos_fs hard_drives, two different
normally_reliable utilities, same progressive_hogging of the cpu.
this makes me wonder about the msdos_fs hard_drive, which is, rapidly,
becoming the only remaining common factor.

ok.
i tried the mailing lists.
right now, i am web_page searching; tar(1) seems to be slow in some
situations, but, i have not, yet, found --this-- situation.
also, in reading the man_pages for mount(1) and tar(1), i am starting
to wonder if this could be a tar(1) "block_size" issue.
i am not doing any encryption or compression, in either direction.

last check, at about 22:45_utc: 99.0%_cpu, 0.1%_mem.

does anyone have any thoughts ?
please cc.

rob
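p.s. --- on the "block_size" idea:
the experiment i have in mind is simply to re_run the extraction with an
explicit blocking factor, along these lines [ the "256" is an arbitrary
pick, and i do not know, yet, whether the blocking factor even matters
when the archive is a plain file on ufs ]:

  cd /mnt/path_cc
  ( tar -x -v -p -l -b 256 -f /path_c/99_ball.tar > ../92_xvpl.out ) >& ../93_xvpl.err &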