Date:      Thu, 28 Nov 2024 14:58:29 +0100 (CET)
From:      Ronald Klop <ronald-lists@klop.ws>
To:        Ronald Klop <ronald-lists@klop.ws>
Cc:        Current FreeBSD <freebsd-current@freebsd.org>, Dennis Clarke <dclarke@blastwave.org>
Subject:   Re: zpools no longer exist after boot
Message-ID:  <1764191396.6959.1732802309600@localhost>
In-Reply-To: <1784014555.6851.1732801799724@localhost>


Btw:

The /etc/rc.d/zpool script looks into these cachefiles:

for cachefile in /etc/zfs/zpool.cache /boot/zfs/zpool.cache; do

I didn't check where the cachefile pool property is used.

Hope this helps resolve the issue, or at least helps you provide more information about your setup.
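For illustration, the idea behind that loop can be sketched roughly as follows. This is a sketch only, not the verbatim /etc/rc.d/zpool script: the helper name import_from_cache is made up here, and the real script would run something like `zpool import -c "$cachefile" -a -N` instead of the echo.

```shell
#!/bin/sh
# Rough sketch (assumption, NOT the verbatim /etc/rc.d/zpool script):
# probe each candidate cache file in order and import pools from the
# first one that is readable.  In the real script the action would be
# roughly:  zpool import -c "$cachefile" -a -N
# (import every pool recorded in the cache file, without mounting).
import_from_cache() {
    for cachefile in "$@"; do
        if [ -r "$cachefile" ]; then
            # First readable candidate wins; stop probing.
            echo "import pools from $cachefile"
            return 0
        fi
    done
    echo "no readable cachefile"
    return 1
}

# Demo with a temporary file standing in for a real zpool.cache:
demo=$(mktemp)
import_from_cache /nonexistent/zpool.cache "$demo"
rm -f "$demo"
```

This also shows why a pool whose cachefile lives somewhere the script never probes (e.g. /var/log/zpool_cache) would not be imported at boot.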

Regards,
Ronald.

From: Ronald Klop <ronald-lists@klop.ws>
Date: 28 November 2024 14:50
To: Dennis Clarke <dclarke@blastwave.org>
CC: Current FreeBSD <freebsd-current@freebsd.org>
Subject: Re: zpools no longer exist after boot

>
> Are the other disks available at the moment the boot process does zpool import?
>
> Regards,
> Ronald
>
> From: Dennis Clarke <dclarke@blastwave.org>
> Date: 28 November 2024 14:06
> To: Current FreeBSD <freebsd-current@freebsd.org>
> Subject: zpools no longer exist after boot
>
>>
>> This is a baffling problem wherein two zpools no longer exist after
>> boot. This is :
>>
>> titan# uname -apKU
>> FreeBSD titan 15.0-CURRENT FreeBSD 15.0-CURRENT #1 main-n273749-4b65481ac68a-dirty: Wed Nov 20 15:08:52 GMT 2024 root@titan:/usr/obj/usr/src/amd64.amd64/sys/GENERIC-NODEBUG amd64 amd64 1500027 1500027
>> titan#
>>
>> titan# zpool list
>> NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP HEALTH  ALTROOT
>> t0     444G  91.2G   353G        -         -    27%    20%  1.00x ONLINE  -
>> titan#
>>
>> The *only* zpool that seems to exist in any reliable way is the little
>> NVME based unit for booting. The other two zpools vanished and yet the
>> devices exist just fine :
>>
>> titan#
>> titan# camcontrol devlist
>>         at scbus0 target 0 lun 0 (pass0,ada0)
>>         at scbus1 target 0 lun 0 (pass1,ada1)
>>    at scbus2 target 0 lun 0 (ses0,pass2)
>>    at scbus6 target 0 lun 0 (ses1,pass3)
>>   at scbus7 target 0 lun 1 (pass4,nda0)
>>              at scbus8 target 0 lun 0 (da0,pass5)
>> titan#
>> titan# nvmecontrol devlist
>>   nvme0: SAMSUNG MZVKW512HMJP-000L7
>>      nvme0ns1 (488386MB)
>> titan#
>> titan# zpool status t0
>>    pool: t0
>>   state: ONLINE
>> status: Some supported and requested features are not enabled on the pool.
>>          The pool can still be used, but some features are unavailable.
>> action: Enable all features using 'zpool upgrade'. Once this is done,
>>          the pool may no longer be accessible by software that does not support
>>          the features. See zpool-features(7) for details.
>>    scan: scrub repaired 0B in 00:00:44 with 0 errors on Wed Feb  7 09:56:40 2024
>> config:
>>
>>          NAME        STATE     READ WRITE CKSUM
>>          t0          ONLINE       0     0     0
>>            nda0p3    ONLINE       0     0     0
>>
>> errors: No known data errors
>> titan#
>>
>>
>> Initially I thought the problem was related to cachefile being empty for
>> these zpools. However if I set the cachefile to something reasonable
>> then the cachefile property vanishes at a reboot.  The file, of course, exists just fine :
>>
>> titan# zpool get cachefile proteus
>> NAME     PROPERTY   VALUE      SOURCE
>> proteus  cachefile  -          default
>> titan#
>> titan# zpool set cachefile="/var/log/zpool_cache" proteus
>> titan# zpool get cachefile proteus
>> NAME     PROPERTY   VALUE                 SOURCE
>> proteus  cachefile  /var/log/zpool_cache  local
>> titan# ls -ladb /var/log/zpool_cache
>> -rw-r--r--  1 root wheel 1440 Nov 28 11:45 /var/log/zpool_cache
>> titan#
>>
>> So there we have 1440 bytes of data in that file.
>>
>> titan# zpool set cachefile="/var/log/zpool_cache" t0
>> titan# zpool get cachefile t0
>> NAME  PROPERTY   VALUE                 SOURCE
>> t0    cachefile  /var/log/zpool_cache  local
>> titan#
>> titan# ls -ladb /var/log/zpool_cache
>> -rw-r--r--  1 root wheel 2880 Nov 28 11:46 /var/log/zpool_cache
>> titan#
>>
>> Now we have 2 * 1440 bytes = 2880 bytes of some zpool cache data.
>>
>> titan# zpool set cachefile="/var/log/zpool_cache" leaf
>> titan# zpool get cachefile leaf
>> NAME  PROPERTY   VALUE                 SOURCE
>> leaf  cachefile  /var/log/zpool_cache  local
>> titan#
>> titan# zpool get cachefile t0
>> NAME  PROPERTY   VALUE                 SOURCE
>> t0    cachefile  /var/log/zpool_cache  local
>> titan#
>> titan# zpool get cachefile proteus
>> NAME     PROPERTY   VALUE                 SOURCE
>> proteus  cachefile  /var/log/zpool_cache  local
>> titan#
>> titan# reboot
>>
>> From here on ... the only zpool that exists after boot is the local
>> little NVME samsung unit.
>>
>> So here I can import those pools and then see that the cachefile property has been wiped out :
>>
>> titan#
>> titan# zpool import proteus
>> titan# zpool import leaf
>> titan#
>> titan# zpool list
>> NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP HEALTH  ALTROOT
>> leaf     18.2T   984K  18.2T        -         -     0%     0%  1.00x ONLINE  -
>> proteus  1.98T   361G  1.63T        -         -     1%    17%  1.00x ONLINE  -
>> t0        444G  91.2G   353G        -         -    27%    20%  1.00x ONLINE  -
>> titan#
>> titan# zpool get cachefile leaf
>> NAME  PROPERTY   VALUE      SOURCE
>> leaf  cachefile  -          default
>> titan#
>> titan# zpool get cachefile proteus
>> NAME     PROPERTY   VALUE      SOURCE
>> proteus  cachefile  -          default
>> titan#
>> titan# zpool get cachefile t0
>> NAME  PROPERTY   VALUE      SOURCE
>> t0    cachefile  -          default
>> titan#
>> titan# ls -l /var/log/zpool_cache
>> -rw-r--r--  1 root wheel 4960 Nov 28 11:52 /var/log/zpool_cache
>> titan#
>>
>> The cachefile exists and seems to have grown in size.
>>
>> However a reboot will once again provide nothing but the t0 pool.
>>
>> Baffled.
>>
>> Any thoughts would be welcome.
>>
>> --
>> Dennis Clarke
>> RISC-V/SPARC/PPC/ARM/CISC
>> UNIX and Linux spoken



Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?1764191396.6959.1732802309600>