Date:      Sun, 8 Dec 2024 11:08:56 GMT
From:      Yuri Victorovich <yuri@FreeBSD.org>
To:        ports-committers@FreeBSD.org, dev-commits-ports-all@FreeBSD.org, dev-commits-ports-main@FreeBSD.org
Subject:   git: aa5c8c811c54 - main - misc/llama-cpp: update 4120 → 4285
Message-ID:  <202412081108.4B8B8uYh061600@gitrepo.freebsd.org>

The branch main has been updated by yuri:

URL: https://cgit.FreeBSD.org/ports/commit/?id=aa5c8c811c5434f57375c011e8757fefa5fc1d98

commit aa5c8c811c5434f57375c011e8757fefa5fc1d98
Author:     Yuri Victorovich <yuri@FreeBSD.org>
AuthorDate: 2024-12-08 07:22:06 +0000
Commit:     Yuri Victorovich <yuri@FreeBSD.org>
CommitDate: 2024-12-08 11:08:54 +0000

    misc/llama-cpp: update 4120 → 4285
---
 misc/llama-cpp/Makefile  |  3 ++-
 misc/llama-cpp/distinfo  |  8 +++++---
 misc/llama-cpp/pkg-plist | 21 ++++++++++++---------
 3 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/misc/llama-cpp/Makefile b/misc/llama-cpp/Makefile
index db42076ad4a8..102c3cf2498e 100644
--- a/misc/llama-cpp/Makefile
+++ b/misc/llama-cpp/Makefile
@@ -1,10 +1,11 @@
 PORTNAME=	llama-cpp
 DISTVERSIONPREFIX=	b
-DISTVERSION=	4120
+DISTVERSION=	4285
 CATEGORIES=	misc # machine-learning
 
 PATCH_SITES=	https://github.com/${GH_ACCOUNT}/${GH_PROJECT}/commit/
 PATCHFILES=	121f915a09c1117d34aff6e8faf6d252aaf11027.patch:-p1 # Add missing pthread includes: https://github.com/ggerganov/llama.cpp/pull/9258
+PATCHFILES+=	723342566862305c6707bf90b69f86625cf26620.patch:-p1 # prevent build freezing, https://github.com/ggerganov/llama.cpp/pull/10713
 
 MAINTAINER=	yuri@FreeBSD.org
 COMMENT=	Facebook's LLaMA model in C/C++ # '
diff --git a/misc/llama-cpp/distinfo b/misc/llama-cpp/distinfo
index e8afef3b44f2..6dfb0c90f552 100644
--- a/misc/llama-cpp/distinfo
+++ b/misc/llama-cpp/distinfo
@@ -1,7 +1,9 @@
-TIMESTAMP = 1731907679
-SHA256 (ggerganov-llama.cpp-b4120_GH0.tar.gz) = ff1e6cde07e3f2a587978ea58d54bece296b61055b500898f702d8fbeff52e73
-SIZE (ggerganov-llama.cpp-b4120_GH0.tar.gz) = 19557501
+TIMESTAMP = 1733639498
+SHA256 (ggerganov-llama.cpp-b4285_GH0.tar.gz) = 5ddcac4db11002f50940b241b289e5bfca81c5032332427c622819dccf61e717
+SIZE (ggerganov-llama.cpp-b4285_GH0.tar.gz) = 19423062
 SHA256 (nomic-ai-kompute-4565194_GH0.tar.gz) = 95b52d2f0514c5201c7838348a9c3c9e60902ea3c6c9aa862193a212150b2bfc
 SIZE (nomic-ai-kompute-4565194_GH0.tar.gz) = 13540496
 SHA256 (121f915a09c1117d34aff6e8faf6d252aaf11027.patch) = 9a0c47ae3cb7dd51b6ce19187dafd48578210f69558f7c8044ee480471f1fd33
 SIZE (121f915a09c1117d34aff6e8faf6d252aaf11027.patch) = 591
+SHA256 (723342566862305c6707bf90b69f86625cf26620.patch) = 20074686fe70eb702528fdeb26b98b2ea81589529365cacf34edb21482040f70
+SIZE (723342566862305c6707bf90b69f86625cf26620.patch) = 7101
diff --git a/misc/llama-cpp/pkg-plist b/misc/llama-cpp/pkg-plist
index dd687531ab97..bf60b248b492 100644
--- a/misc/llama-cpp/pkg-plist
+++ b/misc/llama-cpp/pkg-plist
@@ -1,12 +1,11 @@
 bin/convert_hf_to_gguf.py
-bin/llama-batched
-bin/llama-batched-bench
-bin/llama-bench
-bin/llama-cli
-bin/llama-convert-llama2c-to-ggml
-bin/llama-cvector-generator
-bin/llama-embedding
-bin/llama-simple-chat
+%%EXAMPLES%%bin/llama-batched
+%%EXAMPLES%%bin/llama-batched-bench
+%%EXAMPLES%%bin/llama-bench
+%%EXAMPLES%%bin/llama-cli
+%%EXAMPLES%%bin/llama-convert-llama2c-to-ggml
+%%EXAMPLES%%bin/llama-cvector-generator
+%%EXAMPLES%%bin/llama-embedding
 %%EXAMPLES%%bin/llama-eval-callback
 %%EXAMPLES%%bin/llama-export-lora
 %%EXAMPLES%%bin/llama-gbnf-validator
@@ -29,10 +28,13 @@ bin/llama-simple-chat
 %%EXAMPLES%%bin/llama-quantize
 %%EXAMPLES%%bin/llama-quantize-stats
 %%EXAMPLES%%bin/llama-retrieval
+%%EXAMPLES%%bin/llama-run
 %%EXAMPLES%%bin/llama-save-load-state
 %%EXAMPLES%%bin/llama-server
 %%EXAMPLES%%bin/llama-simple
+%%EXAMPLES%%bin/llama-simple-chat
 %%EXAMPLES%%bin/llama-speculative
+%%EXAMPLES%%bin/llama-speculative-simple
 %%EXAMPLES%%bin/llama-tokenize
 %%VULKAN%%bin/vulkan-shaders-gen
 include/ggml-alloc.h
@@ -48,13 +50,14 @@ include/ggml-rpc.h
 include/ggml-sycl.h
 include/ggml-vulkan.h
 include/ggml.h
+include/llama-cpp.h
 include/llama.h
 lib/cmake/llama/llama-config.cmake
 lib/cmake/llama/llama-version.cmake
-lib/libggml.so
 lib/libggml-base.so
 lib/libggml-cpu.so
 lib/libggml-vulkan.so
+lib/libggml.so
 lib/libllama.so
 %%EXAMPLES%%lib/libllava_shared.so
 libdata/pkgconfig/llama.pc
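The distinfo hunk above records a new SHA256 and SIZE for the b4285 distfile and for the added patch; the ports framework regenerates these with `make makesum` and verifies them at fetch time with `make checksum`. As a minimal sketch of the verification step only (not the actual ports machinery), the same comparison can be illustrated with an empty file, whose SHA-256 is well known. Note the GNU coreutils name `sha256sum` is used here; FreeBSD base provides the equivalent `sha256 -q`.

```shell
#!/bin/sh
# Sketch of a distinfo-style checksum check: compare a recorded SHA256
# against the hash recomputed from the fetched file.
: > /tmp/distfile.sample   # stand-in distfile (empty, hash known)

# Value that would be recorded in distinfo (SHA-256 of the empty string).
recorded="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

# Recompute the hash of the "fetched" file, as make checksum would.
actual=$(sha256sum /tmp/distfile.sample | awk '{print $1}')

if [ "$recorded" = "$actual" ]; then
    echo "checksum OK"
else
    echo "checksum mismatch" >&2
    exit 1
fi
```

In the real port directory, `make makesum` rewrites distinfo after a DISTVERSION bump, which is what produced the TIMESTAMP/SHA256/SIZE changes in this commit.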