From owner-freebsd-geom@FreeBSD.ORG Sat Dec 13 14:30:22 2008
From: "Ulf Lilleengen" <ulf.lilleengen@gmail.com>
To: "Michael Jung"
Cc: freebsd-geom@freebsd.org
Date: Sat, 13 Dec 2008 14:59:01 +0100
Subject: Re: Encrypting raid5 volume with geli
Message-ID: <917871cf0812130559r6d423688q57287dd765d6edf4@mail.gmail.com>
References: <20081212155023.GA82667@keira.kiwi-computer.com>

On Fri, Dec 12, 2008 at 5:00 PM, Michael Jung wrote:
> FreeBSD charon.confluentasp.com 7.1-PRERELEASE FreeBSD 7.1-PRERELEASE
> #2: Thu Sep 4 12:06:08 EDT 2008
>
> In the interest of this thread I tried to duplicate the problem. I created:
>
> 10 drives:
> D d9                State: up    /dev/da9    A: 0/17366 MB (0%)
> D d8                State: up    /dev/da8    A: 0/17366 MB (0%)
> D d7                State: up    /dev/da7    A: 0/17366 MB (0%)
> D d6                State: up    /dev/da6    A: 0/17366 MB (0%)
> D d5                State: up    /dev/da5    A: 0/17366 MB (0%)
> D d4                State: up    /dev/da4    A: 0/17366 MB (0%)
> D d3                State: up    /dev/da3    A: 0/17366 MB (0%)
> D d2                State: up    /dev/da2    A: 0/17366 MB (0%)
> D d1                State: up    /dev/da1    A: 0/17366 MB (0%)
> D d0                State: up    /dev/da0    A: 0/17366 MB (0%)
>
> 1 volume:
> V test              State: up    Plexes: 1     Size: 152 GB
>
> 1 plex:
> P test.p0        R5 State: up    Subdisks: 10  Size: 152 GB
>
> 10 subdisks:
> S test.p0.s9        State: up    D: d9         Size: 16 GB
> S test.p0.s8        State: up    D: d8         Size: 16 GB
> S test.p0.s7        State: up    D: d7         Size: 16 GB
> S test.p0.s6        State: up    D: d6         Size: 16 GB
> S test.p0.s5        State: up    D: d5         Size: 16 GB
> S test.p0.s4        State: up    D: d4         Size: 16 GB
> S test.p0.s3        State: up    D: d3         Size: 16 GB
> S test.p0.s2        State: up    D: d2         Size: 16 GB
> S test.p0.s1        State: up    D: d1         Size: 16 GB
> S test.p0.s0        State: up    D: d0         Size: 16 GB
>
> which I can newfs and mount:
>
> (root@charon) /etc# mount /dev/gvinum/test /mnt
> (root@charon) /etc# df -h
> Filesystem               Size    Used   Avail  Capacity  Mounted on
> /dev/ad4s1a              357G    119G    209G     36%    /
> devfs                    1.0K    1.0K      0B    100%    /dev
> 172.0.255.28:/data/unix  1.3T    643G    559G     54%    /nas1
> /dev/gvinum/test         148G    4.0K    136G      0%    /mnt
>
> But with /dev/gvinum/test unmounted, if I try:
>
> (root@charon) /etc# geli init -P -K /root/test.key /dev/gvinum/test
> geli: Cannot store metadata on /dev/gvinum/test: Operation not permitted.
> (root@charon) /etc#
>
> My random key file was created like this:
>
> dd if=/dev/random of=/root/test.key bs=64 count=1
>
> I use GELI at home with no trouble, although not with a gvinum volume.

Hello,

When I tried this myself, I also got the EPERM error, which struck me as very strange. I went through the gvinum code today and put debugging prints everywhere, but everything looked fine, and only raid5 volumes failed. Then I saw that the EPERM came from GEOM's underlying providers (more specifically, from the read requests to the parity stripes), so I began to suspect it was not a gvinum error. And yet I could still read from and write to the disks outside of gvinum!

Digging into the geom(8) userland code, I discovered the reason: it opens the disk where the metadata should be written in write-only mode. When geli stores its metadata, gvinum has to write to the stripe in question, but it must first read back the parity data from one of the other stripes. Since the providers were opened O_WRONLY, those read requests fail. I tried opening the device O_RDWR instead, and everything works. Phew :) You can bet I was frustrated.

I hope to commit the attached change in the near future.
-- 
Ulf Lilleengen

Attachment: geomfix.diff

Index: subr.c
===================================================================
--- subr.c	(revision 185930)
+++ subr.c	(working copy)
@@ -211,7 +211,7 @@
 	sector = NULL;
 	error = 0;
 
-	fd = open(path, O_WRONLY);
+	fd = open(path, O_RDWR);
 	if (fd == -1)
 		return (errno);
 	mediasize = g_get_mediasize(name);