Date:      Fri, 21 Apr 2023 12:18:45 +0200
From:      Mario Marietto <marietto2008@gmail.com>
To:        Aryeh Friedman <aryeh.friedman@gmail.com>
Cc:        FreeBSD Mailing List <freebsd-hackers@freebsd.org>,  FreeBSD Mailing List <freebsd-questions@freebsd.org>, Yuri Victorovich <yuri@freebsd.org>
Subject:   Re: Installing openAI's GPT-2 Ada AI Language Model
Message-ID:  <CA+1FSijsSSpCFeKeaOt4gR36BAZ4J8j4QSJRJa-VF-a=J9e2uw@mail.gmail.com>
In-Reply-To: <CAGBxaXkhC--ZppimDFabEwPhesjAJmrziNZm753eoyjy1sWzqg@mail.gmail.com>
References:  <CAGBxaXmhRLk9Lx_ZHeRdoN-K2fRLEhY3cBVtBymmAjd4bBh1OQ@mail.gmail.com> <CA+1FSihQ-f4uhiOjYH8Wo=AxFEkAKe3NRDJdopgT50J=_jY4fA@mail.gmail.com> <CAGBxaXnYojzQJqO62hkzUJvD2rzaNp+em38FgCqVSBu+mkBi9A@mail.gmail.com> <CA+1FSijpiko+++wJuXo2GVV6sz3yGVi7ig0X3037+1zE3n91hg@mail.gmail.com> <CAGBxaX=OcaHEZk3S7jQeYW64A_iRNTmJ+ab4U7h_hsrG+QqQPg@mail.gmail.com> <ZEEnZjzDCtR_ZG4P@graf.pompo.net> <CAGBxaXmU=Ja9EkoMyxQ0cNxYB4BeiktqQ3P64QcWg+=xijTiyQ@mail.gmail.com> <CA+1FSii6OOwi++au-_9ViU_SMZ+GbESG5H0McVTHQUwmMnOJGQ@mail.gmail.com> <CAGBxaXkhC--ZppimDFabEwPhesjAJmrziNZm753eoyjy1sWzqg@mail.gmail.com>

Can't you install pytorch using the linux miniconda installer like below?

# fetch https://gist.githubusercontent.com/shkhln/40ef290463e78fb2b0000c60f4ad797e/raw/f640983249607e38af405c95c457ce4afc85c608/uvm_ioctl_override.c

# /compat/ubuntu/bin/gcc --sysroot=/compat/ubuntu -m64 -std=c99 -Wall
-ldl -fPIC -shared -o dummy-uvm.so uvm_ioctl_override.c

# pkg install linux-miniconda-installer
# miniconda-installer
# bash
# source /home/marietto/miniconda3/etc/profile.d/conda.sh
# conda activate

(base) # conda activate pytorch
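Once the environment is active, a quick sanity check catches a broken install before going further. This is a sketch, not part of the original commands: the function name `check_torch` is my own, and it assumes the conda env from the steps above put `python3` on the PATH.

```shell
# Hedged sanity check: confirm the active interpreter can import torch.
# Prints the version on success, a diagnostic otherwise.
check_torch() {
  if out=$(python3 -c 'import torch; print(torch.__version__)' 2>/dev/null); then
    echo "pytorch OK $out"
  else
    echo "pytorch NOT importable (is the conda env active?)"
  fi
}
check_torch
```

Either message is informative: the failure branch usually means the env was not activated, or the install targeted a different interpreter.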


On Fri, Apr 21, 2023 at 2:38 AM Aryeh Friedman <aryeh.friedman@gmail.com>
wrote:

> On Thu, Apr 20, 2023 at 12:24 PM Mario Marietto <marietto2008@gmail.com>
> wrote:
> >
> > try to copy and paste the commands that you have issued on pastebin... I
> need to understand the scenario
>
> After saving the patch from the bug report to PORT/files and running
> portmaster -P misc/pytorch (brand new machine except for installing
> portmaster):
>
> c/ATen/UfuncCPUKernel_add.cpp.AVX2.cpp.o -c
> /usr/ports/misc/pytorch/work/.build/aten/src/ATen/UfuncCPUKernel_add.cpp.AVX2.cpp
> In file included from /usr/ports/misc/pytorch/work/.build/aten/src/ATen/UfuncCPUKernel_add.cpp.AVX2.cpp:1:
> In file included from /usr/ports/misc/pytorch/work/.build/aten/src/ATen/UfuncCPUKernel_add.cpp:3:
> In file included from /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/native/ufunc/add.h:6:
> In file included from /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/functional.h:3:
> In file included from /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/functional_base.h:6:
> In file included from /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec.h:6:
> In file included from /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256.h:12:
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:253:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_acosf8_u10);
>                ^~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:256:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_asinf8_u10);
>                ^~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:259:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_atanf8_u10);
>                ^~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:280:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_erff8_u10);
>                ^~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:283:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_erfcf8_u15);
>                ^~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:300:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_expf8_u10);
>                ^~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:303:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_expm1f8_u10);
>                ^~~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:393:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_logf8_u10);
>                ^~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:396:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_log2f8_u10);
>                ^~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:399:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_log10f8_u10);
>                ^~~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:402:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_log1pf8_u10);
>                ^~~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:406:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_sinf8_u10);
>                ^~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:409:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_sinhf8_u10);
>                ^~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:412:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_cosf8_u10);
>                ^~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:415:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_coshf8_u10);
>                ^~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:447:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_tanf8_u10);
>                ^~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:450:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_tanhf8_u10);
>                ^~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:460:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_lgammaf8_u10);
>                ^~~~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> 18 errors generated.
> [ 80% 1035/1283] /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS
> -DCPUINFO_SUPPORTED_PLATFORM=0 -DFMT_HEADER_ONLY=1
> -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1
> -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
> -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx
> -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS
> -I/usr/ports/misc/pytorch/work/.build/aten/src
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src
> -I/usr/ports/misc/pytorch/work/.build
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/foxi
> -I/usr/ports/misc/pytorch/work/.build/third_party/foxi
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/include
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/caffe2/aten/src/TH
> -I/usr/ports/misc/pytorch/work/.build/caffe2/aten/src/TH
> -I/usr/ports/misc/pytorch/work/.build/caffe2/aten/src
> -I/usr/ports/misc/pytorch/work/.build/caffe2/../aten/src
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/miniz-2.1.0
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/kineto/libkineto/include
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/kineto/libkineto/src
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/../third_party/catch/single_include
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/..
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/c10/..
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/cpuinfo/include
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/FP16/include
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/fmt/include
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/flatbuffers/include
> -isystem /usr/ports/misc/pytorch/work/pytorch-v1.13.1/cmake/../third_party/eigen
> -isystem /usr/ports/misc/pytorch/work/pytorch-v1.13.1/caffe2 -O2 -pipe
> -fstack-protector-strong -isystem /usr/local/include
> -fno-strict-aliasing -isystem /usr/local/include -Wno-deprecated
> -fvisibility-inlines-hidden -fopenmp=libomp -DNDEBUG -DUSE_KINETO
> -DLIBKINETO_NOCUPTI -DSYMBOLICATE_MOBILE_DEBUG_HANDLE
> -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra
> -Werror=return-type -Werror=non-virtual-dtor
> -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds
> -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter
> -Wno-unused-function -Wno-unused-result -Wno-strict-overflow
> -Wno-strict-aliasing -Wno-error=deprecated-declarations
> -Wvla-extension -Wno-range-loop-analysis -Wno-pass-failed
> -Wno-error=pedantic -Wno-error=redundant-decls
> -Wno-error=old-style-cast -Wconstant-conversion
> -Wno-invalid-partial-specialization -Wno-typedef-redefinition
> -Wno-unused-private-field -Wno-inconsistent-missing-override
> -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces
> -Wunused-lambda-capture -Wunused-local-typedef -Qunused-arguments
> -fcolor-diagnostics -fdiagnostics-color=always
> -Wno-unused-but-set-variable -fno-math-errno -fno-trapping-math
> -Werror=format -Werror=cast-function-type -DHAVE_AVX512_CPU_DEFINITION
> -DHAVE_AVX2_CPU_DEFINITION -O2 -pipe -fstack-protector-strong -isystem
> /usr/local/include -fno-strict-aliasing -isystem /usr/local/include
> -DNDEBUG -DNDEBUG -std=gnu++14 -fPIC -DTH_HAVE_THREAD -Wall -Wextra
> -Wno-unused-parameter -Wno-unused-function -Wno-unused-result
> -Wno-missing-field-initializers -Wno-write-strings
> -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds
> -Wno-sign-compare -Wno-strict-overflow -Wno-strict-aliasing
> -Wno-error=deprecated-declarations -Wno-missing-braces
> -Wno-range-loop-analysis -fvisibility=hidden -O2 -fopenmp=libomp
> -DCAFFE2_BUILD_MAIN_LIB -pthread -MD -MT
> caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o
> -MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o.d
> -o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o
> -c /usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/src/serialize/input-archive.cpp
> [ 80% 1035/1283] /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS
> -DCPUINFO_SUPPORTED_PLATFORM=0 -DFMT_HEADER_ONLY=1
> -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1
> -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
> -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx
> -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS
> -I/usr/ports/misc/pytorch/work/.build/aten/src
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src
> -I/usr/ports/misc/pytorch/work/.build
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/foxi
> -I/usr/ports/misc/pytorch/work/.build/third_party/foxi
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/include
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/caffe2/aten/src/TH
> -I/usr/ports/misc/pytorch/work/.build/caffe2/aten/src/TH
> -I/usr/ports/misc/pytorch/work/.build/caffe2/aten/src
> -I/usr/ports/misc/pytorch/work/.build/caffe2/../aten/src
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/miniz-2.1.0
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/kineto/libkineto/include
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/kineto/libkineto/src
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/../third_party/catch/single_include
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/..
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/c10/..
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/cpuinfo/include
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/FP16/include
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/fmt/include
> -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/flatbuffers/include
> -isystem /usr/ports/misc/pytorch/work/pytorch-v1.13.1/cmake/../third_party/eigen
> -isystem /usr/ports/misc/pytorch/work/pytorch-v1.13.1/caffe2 -O2 -pipe
> -fstack-protector-strong -isystem /usr/local/include
> -fno-strict-aliasing -isystem /usr/local/include -Wno-deprecated
> -fvisibility-inlines-hidden -fopenmp=libomp -DNDEBUG -DUSE_KINETO
> -DLIBKINETO_NOCUPTI -DSYMBOLICATE_MOBILE_DEBUG_HANDLE
> -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra
> -Werror=return-type -Werror=non-virtual-dtor
> -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds
> -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter
> -Wno-unused-function -Wno-unused-result -Wno-strict-overflow
> -Wno-strict-aliasing -Wno-error=deprecated-declarations
> -Wvla-extension -Wno-range-loop-analysis -Wno-pass-failed
> -Wno-error=pedantic -Wno-error=redundant-decls
> -Wno-error=old-style-cast -Wconstant-conversion
> -Wno-invalid-partial-specialization -Wno-typedef-redefinition
> -Wno-unused-private-field -Wno-inconsistent-missing-override
> -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces
> -Wunused-lambda-capture -Wunused-local-typedef -Qunused-arguments
> -fcolor-diagnostics -fdiagnostics-color=always
> -Wno-unused-but-set-variable -fno-math-errno -fno-trapping-math
> -Werror=format -Werror=cast-function-type -DHAVE_AVX512_CPU_DEFINITION
> -DHAVE_AVX2_CPU_DEFINITION -O2 -pipe -fstack-protector-strong -isystem
> /usr/local/include -fno-strict-aliasing -isystem /usr/local/include
> -DNDEBUG -DNDEBUG -std=gnu++14 -fPIC -DTH_HAVE_THREAD -Wall -Wextra
> -Wno-unused-parameter -Wno-unused-function -Wno-unused-result
> -Wno-missing-field-initializers -Wno-write-strings
> -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds
> -Wno-sign-compare -Wno-strict-overflow -Wno-strict-aliasing
> -Wno-error=deprecated-declarations -Wno-missing-braces
> -Wno-range-loop-analysis -fvisibility=hidden -O2 -fopenmp=libomp
> -DCAFFE2_BUILD_MAIN_LIB -pthread -MD -MT
> caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o
> -MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o.d
> -o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o
> -c /usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/src/serialize/output-archive.cpp
> ninja: build stopped: subcommand failed.
> ===> Compilation failed unexpectedly.
> Try to set MAKE_JOBS_UNSAFE=yes and rebuild before reporting the failure to
> the maintainer.
> *** Error code 1
>
> Stop.
> make: stopped in /usr/ports/misc/pytorch
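All 18 diagnostics above are the same clang complaint, differing only in which Sleef entry point is passed to map() in vec256_bfloat16.h. When triaging a log like this, a small helper collapses the repetition into a short summary. This is a sketch of my own, not part of the thread; `build.log` is a hypothetical file holding the saved build output.

```shell
# List the distinct Sleef_* symbols named in a saved build log, so the
# repeated diagnostics collapse into a short, unique list.
summarize_sleef_errors() {
  grep -o 'Sleef_[A-Za-z0-9_]*' "$1" | sort -u
}

# Usage (hypothetical log path): summarize_sleef_errors build.log
```

On this log it would reduce the 18 errors to the 18 distinct function names, which is the list a patch for the port needs to cover.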
>
> >
> > On Thu, Apr 20, 2023 at 5:51 PM Aryeh Friedman <aryeh.friedman@gmail.com>
> > wrote:
> >>
> >> On Thu, Apr 20, 2023 at 7:52 AM Thierry Thomas <thierry@freebsd.org>
> >> wrote:
> >> >
> >> > On Thu, Apr 20, 2023 at 12:53:05 +0200, Aryeh Friedman
> >> > <aryeh.friedman@gmail.com> wrote:
> >> >
> >> > > Running without GPU (for now) on a bhyve vm (3 CPU, 2 GB RAM and 100
> >> > > GB of disk) which I intend to use for determining whether it is worth
> >> > > going out and getting the hardware to do GPU. The problem I had was
> >> > > getting pytorch to work since it appears I have to build it from
> >> > > source and it blows up in that build.
> >> >
> >> > Have you seen
> >> > <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=269739> ?
> >>
> >> This seems to be true for all OSes. I guess I will have to find an
> >> Intel machine... this is as bad as the motivation that led me to do
> >> PetiteCloud in the first place (OpenStack not running on AMD, period).
> >> Is there just no way to run an ANN in pytorch data format in any other
> >> way that is not python (like Java?!!?) Note the tensorflow port
> >> required pytorch.
> >>
> >>
> >> --
> >> Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
> >>
>
>
> --
> Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
>


--
Mario.

note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:303:16:<br>
error: cannot initialize a parameter of type &#39;const __m256<br>
(*)(__m256)&#39; with an lvalue of type &#39;__m256 (__m256)&#39;: differen=
t<br>
return type (&#39;const __m256&#39; (vector of 8 &#39;float&#39; values) vs=
 &#39;__m256&#39;<br>
(vector of 8 &#39;float&#39; values))<br>
=C2=A0 =C2=A0 return map(Sleef_expm1f8_u10);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~~<br=
>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:209:49:<br>
note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:393:16:<br>
error: cannot initialize a parameter of type &#39;const __m256<br>
(*)(__m256)&#39; with an lvalue of type &#39;__m256 (__m256)&#39;: differen=
t<br>
return type (&#39;const __m256&#39; (vector of 8 &#39;float&#39; values) vs=
 &#39;__m256&#39;<br>
(vector of 8 &#39;float&#39; values))<br>
=C2=A0 =C2=A0 return map(Sleef_logf8_u10);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:209:49:<br>
note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:396:16:<br>
error: cannot initialize a parameter of type &#39;const __m256<br>
(*)(__m256)&#39; with an lvalue of type &#39;__m256 (__m256)&#39;: differen=
t<br>
return type (&#39;const __m256&#39; (vector of 8 &#39;float&#39; values) vs=
 &#39;__m256&#39;<br>
(vector of 8 &#39;float&#39; values))<br>
=C2=A0 =C2=A0 return map(Sleef_log2f8_u10);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:209:49:<br>
note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:399:16:<br>
error: cannot initialize a parameter of type &#39;const __m256<br>
(*)(__m256)&#39; with an lvalue of type &#39;__m256 (__m256)&#39;: differen=
t<br>
return type (&#39;const __m256&#39; (vector of 8 &#39;float&#39; values) vs=
 &#39;__m256&#39;<br>
(vector of 8 &#39;float&#39; values))<br>
=C2=A0 =C2=A0 return map(Sleef_log10f8_u10);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~~<br=
>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:209:49:<br>
note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:402:16:<br>
error: cannot initialize a parameter of type &#39;const __m256<br>
(*)(__m256)&#39; with an lvalue of type &#39;__m256 (__m256)&#39;: differen=
t<br>
return type (&#39;const __m256&#39; (vector of 8 &#39;float&#39; values) vs=
 &#39;__m256&#39;<br>
(vector of 8 &#39;float&#39; values))<br>
=C2=A0 =C2=A0 return map(Sleef_log1pf8_u10);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~~<br=
>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:209:49:<br>
note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:406:16:<br>
error: cannot initialize a parameter of type &#39;const __m256<br>
(*)(__m256)&#39; with an lvalue of type &#39;__m256 (__m256)&#39;: differen=
t<br>
return type (&#39;const __m256&#39; (vector of 8 &#39;float&#39; values) vs=
 &#39;__m256&#39;<br>
(vector of 8 &#39;float&#39; values))<br>
=C2=A0 =C2=A0 return map(Sleef_sinf8_u10);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:209:49:<br>
note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:409:16:<br>
error: cannot initialize a parameter of type &#39;const __m256<br>
(*)(__m256)&#39; with an lvalue of type &#39;__m256 (__m256)&#39;: differen=
t<br>
return type (&#39;const __m256&#39; (vector of 8 &#39;float&#39; values) vs=
 &#39;__m256&#39;<br>
(vector of 8 &#39;float&#39; values))<br>
=C2=A0 =C2=A0 return map(Sleef_sinhf8_u10);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:209:49:<br>
note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:412:16:<br>
error: cannot initialize a parameter of type &#39;const __m256<br>
(*)(__m256)&#39; with an lvalue of type &#39;__m256 (__m256)&#39;: differen=
t<br>
return type (&#39;const __m256&#39; (vector of 8 &#39;float&#39; values) vs=
 &#39;__m256&#39;<br>
(vector of 8 &#39;float&#39; values))<br>
=C2=A0 =C2=A0 return map(Sleef_cosf8_u10);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:209:49:<br>
note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:415:16:<br>
error: cannot initialize a parameter of type &#39;const __m256<br>
(*)(__m256)&#39; with an lvalue of type &#39;__m256 (__m256)&#39;: differen=
t<br>
return type (&#39;const __m256&#39; (vector of 8 &#39;float&#39; values) vs=
 &#39;__m256&#39;<br>
(vector of 8 &#39;float&#39; values))<br>
=C2=A0 =C2=A0 return map(Sleef_coshf8_u10);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:209:49:<br>
note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:447:16:<br>
error: cannot initialize a parameter of type &#39;const __m256<br>
(*)(__m256)&#39; with an lvalue of type &#39;__m256 (__m256)&#39;: differen=
t<br>
return type (&#39;const __m256&#39; (vector of 8 &#39;float&#39; values) vs=
 &#39;__m256&#39;<br>
(vector of 8 &#39;float&#39; values))<br>
=C2=A0 =C2=A0 return map(Sleef_tanf8_u10);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:209:49:<br>
note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:450:16:<br>
error: cannot initialize a parameter of type &#39;const __m256<br>
(*)(__m256)&#39; with an lvalue of type &#39;__m256 (__m256)&#39;: differen=
t<br>
return type (&#39;const __m256&#39; (vector of 8 &#39;float&#39; values) vs=
 &#39;__m256&#39;<br>
(vector of 8 &#39;float&#39; values))<br>
=C2=A0 =C2=A0 return map(Sleef_tanhf8_u10);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:209:49:<br>
note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:460:16:<br>
error: cannot initialize a parameter of type &#39;const __m256<br>
(*)(__m256)&#39; with an lvalue of type &#39;__m256 (__m256)&#39;: differen=
t<br>
return type (&#39;const __m256&#39; (vector of 8 &#39;float&#39; values) vs=
 &#39;__m256&#39;<br>
(vector of 8 &#39;float&#39; values))<br>
=C2=A0 =C2=A0 return map(Sleef_lgammaf8_u10);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~~~<b=
r>
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v=
ec256_bfloat16.h:209:49:<br>
note: passing argument to parameter &#39;vop&#39; here<br>
=C2=A0 Vectorized&lt;BFloat16&gt; map(const __m256 (*const vop)(__m256)) co=
nst {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 ^<br>
18 errors generated.<br>
[ 80% 1035/1283] /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS
-DCPUINFO_SUPPORTED_PLATFORM=0 -DFMT_HEADER_ONLY=1
-DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1
-DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
-DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx
-DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS
-I/usr/ports/misc/pytorch/work/.build/aten/src
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src
-I/usr/ports/misc/pytorch/work/.build
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/foxi
-I/usr/ports/misc/pytorch/work/.build/third_party/foxi
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/caffe2/aten/src/TH
-I/usr/ports/misc/pytorch/work/.build/caffe2/aten/src/TH
-I/usr/ports/misc/pytorch/work/.build/caffe2/aten/src
-I/usr/ports/misc/pytorch/work/.build/caffe2/../aten/src
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/miniz-2.1.0
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/kineto/libkineto/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/kineto/libkineto/src
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/../third_party/catch/single_include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/..
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/c10/..
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/cpuinfo/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/FP16/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/fmt/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/flatbuffers/include
-isystem /usr/ports/misc/pytorch/work/pytorch-v1.13.1/cmake/../third_party/eigen
-isystem /usr/ports/misc/pytorch/work/pytorch-v1.13.1/caffe2 -O2 -pipe
-fstack-protector-strong -isystem /usr/local/include
-fno-strict-aliasing -isystem /usr/local/include -Wno-deprecated
-fvisibility-inlines-hidden -fopenmp=libomp -DNDEBUG -DUSE_KINETO
-DLIBKINETO_NOCUPTI -DSYMBOLICATE_MOBILE_DEBUG_HANDLE
-DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra
-Werror=return-type -Werror=non-virtual-dtor
-Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds
-Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter
-Wno-unused-function -Wno-unused-result -Wno-strict-overflow
-Wno-strict-aliasing -Wno-error=deprecated-declarations
-Wvla-extension -Wno-range-loop-analysis -Wno-pass-failed
-Wno-error=pedantic -Wno-error=redundant-decls
-Wno-error=old-style-cast -Wconstant-conversion
-Wno-invalid-partial-specialization -Wno-typedef-redefinition
-Wno-unused-private-field -Wno-inconsistent-missing-override
-Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces
-Wunused-lambda-capture -Wunused-local-typedef -Qunused-arguments
-fcolor-diagnostics -fdiagnostics-color=always
-Wno-unused-but-set-variable -fno-math-errno -fno-trapping-math
-Werror=format -Werror=cast-function-type -DHAVE_AVX512_CPU_DEFINITION
-DHAVE_AVX2_CPU_DEFINITION -O2 -pipe -fstack-protector-strong -isystem
/usr/local/include -fno-strict-aliasing -isystem /usr/local/include
-DNDEBUG -DNDEBUG -std=gnu++14 -fPIC -DTH_HAVE_THREAD -Wall -Wextra
-Wno-unused-parameter -Wno-unused-function -Wno-unused-result
-Wno-missing-field-initializers -Wno-write-strings
-Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds
-Wno-sign-compare -Wno-strict-overflow -Wno-strict-aliasing
-Wno-error=deprecated-declarations -Wno-missing-braces
-Wno-range-loop-analysis -fvisibility=hidden -O2 -fopenmp=libomp
-DCAFFE2_BUILD_MAIN_LIB -pthread -MD -MT
caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o
-MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o.d
-o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o
-c /usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/src/serialize/input-archive.cpp
[ 80% 1035/1283] /usr/bin/c++ (same defines, include paths, and warning
flags as the previous command) -MD -MT
caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o
-MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o.d
-o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o
-c /usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/src/serialize/output-archive.cpp
ninja: build stopped: subcommand failed.
===> Compilation failed unexpectedly.
Try to set MAKE_JOBS_UNSAFE=yes and rebuild before reporting the failure to
the maintainer.
*** Error code 1

Stop.
make: stopped in /usr/ports/misc/pytorch
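The ports framework's advice above can be applied per build (make MAKE_JOBS_UNSAFE=yes) or persistently. A sketch of an /etc/make.conf fragment for the persistent route, assuming the standard /usr/ports layout; note that the failure here is a genuine compile error, so a single-job rebuild will most likely reproduce it:

```make
# Restrict misc/pytorch to a single build job, as the log suggests,
# before reporting the failure to the maintainer.
.if ${.CURDIR:M*/misc/pytorch}
MAKE_JOBS_UNSAFE=yes
.endif
```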
>
> On Thu, Apr 20, 2023 at 5:51 PM Aryeh Friedman <aryeh.friedman@gmail.com> wrote:
>>
>> On Thu, Apr 20, 2023 at 7:52 AM Thierry Thomas <thierry@freebsd.org> wrote:
>> >
>> > On Thu, Apr 20, 2023 at 12:53:05 +0200, Aryeh Friedman <aryeh.friedman@gmail.com> wrote:
>> >
>> > > Running without GPU (for now) on a bhyve VM (3 CPUs, 2 GB RAM, and
>> > > 100 GB of disk), which I intend to use to decide whether it is worth
>> > > going out and getting the hardware for GPU work. The problem I had
>> > > was getting pytorch to work, since it appears I have to build it
>> > > from source, and that build blows up.
>> >
>> > Have you seen
>> > <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=269739> ?
>>
>> This seems to be true for all OSes. I guess I will have to find an
>> Intel machine... this is as bad as the motivation that led me to do
>> PetiteCloud in the first place (OpenStack not running on AMD, period).
>> Is there just no way to run an ANN in pytorch data format in any way
>> that is not Python (like Java)? Note that the tensorflow port
>> requires pytorch.
>>
>>
>> --
>> Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
>>


-- 
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org

-- 
Mario.
