Name: llama-cpp-devel
Version: b4094
Release: 9.fc42
Distribution: Fedora Project
Vendor: Fedora Project
Build date: Fri Jan 17 16:53:49 2025
Build host: buildhw-x86-14.iad2.fedoraproject.org
Group: Unspecified
Size: 900629684
Source RPM: llama-cpp-b4094-9.fc42.src.rpm
Packager: Fedora Project
Url: https://github.com/ggerganov/llama.cpp
Summary: Port of Facebook's LLaMA model in C/C++
The main goal of llama.cpp is to run the LLaMA model using 4-bit integer quantization on a MacBook.

* Plain C/C++ implementation without dependencies
* Apple silicon first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
* AVX, AVX2 and AVX512 support for x86 architectures
* Mixed F16 / F32 precision
* 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit integer quantization support
* CUDA, Metal and OpenCL GPU backend support

The original implementation of llama.cpp was hacked in an evening. Since then, the project has improved significantly thanks to many contributions. This project is mainly for educational purposes and serves as the main playground for developing new features for the ggml library.
License: MIT AND Apache-2.0 AND LicenseRef-Fedora-Public-Domain
* Fri Jan 17 2025 Fedora Release Engineering <releng@fedoraproject.org> - b4094-9
- Rebuilt for https://fedoraproject.org/wiki/Fedora_42_Mass_Rebuild
* Tue Dec 03 2024 Debarshi Ray <rishi@fedoraproject.org> - b4094-8
- Enable the GPU accelerated HIP code
* Tue Dec 03 2024 Debarshi Ray <rishi@fedoraproject.org> - b4094-7
- Fix the build options to use AVX, FMA and F16C instructions
* Tue Dec 03 2024 Debarshi Ray <rishi@fedoraproject.org> - b4094-6
- Restore OpenMP support on x86_64
* Thu Nov 28 2024 Tom Rix <Tom.Rix@amd.com> - b4094-5
- Remove git from build requires
* Wed Nov 27 2024 Debarshi Ray <rishi@fedoraproject.org> - b4094-4
- Remove misspelt and unused build options
* Wed Nov 27 2024 Debarshi Ray <rishi@fedoraproject.org> - b4094-3
- Silence mixed-use-of-spaces-and-tabs
* Tue Nov 26 2024 Tomas Tomecek <ttomecek@redhat.com> - b4094-2
- Run upstream tests in Fedora CI
* Mon Nov 25 2024 Tomas Tomecek <ttomecek@redhat.com> - b4094-1
- Update to b4094
* Fri Oct 18 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b3837-4
- Updated dependencies and fixed ROCm issues in Rawhide and F40 (F39 doesn't have the relevant dependencies)
* Fri Oct 11 2024 Tom Rix <Tom.Rix@amd.com> - b3837-3
- Add ROCm backend
* Thu Oct 10 2024 Tom Rix <Tom.Rix@amd.com> - b3837-2
- ccache is not available on RHEL
* Sat Sep 28 2024 Tom Rix <Tom.Rix@amd.com> - b3837-1
- Update to b3837
* Wed Sep 04 2024 Tom Rix <Tom.Rix@amd.com> - b3667-1
- Update to b3667
* Thu Jul 18 2024 Fedora Release Engineering <releng@fedoraproject.org> - b3184-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_41_Mass_Rebuild
* Sat Jun 22 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b3184-3
- Added changelog
* Sat Jun 22 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b3184-2
- Added .pc file
* Sat Jun 22 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b3184-1
- Upgraded to b3184, which is used by llama-cpp-python v0.2.79
* Tue May 21 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2879-7
- Removed old file names from .gitignore
* Sun May 19 2024 Tom Rix <trix@redhat.com> - b2879-6
- Remove old sources
* Sun May 19 2024 Tom Rix <trix@redhat.com> - b2879-5
- Include missing sources
* Sat May 18 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2879-4
- Added build dependencies and changelog
* Sat May 18 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2879-3
- Added additional source
* Fri May 17 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2879-2
- Updated
* Fri May 17 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2879-1
- Updated and fixed build bugs
* Mon May 13 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2861-7
- Removed source 1
* Mon May 13 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2861-6
- Added llama.cpp-b2861.tar.gz to .gitignore
* Mon May 13 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2861-5
- Fixed source 1 URL
* Mon May 13 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2861-4
- Added tag release as source 1
* Mon May 13 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2861-3
- Fix source hash
* Sun May 12 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2861-2
- Fix mistake in version
* Sun May 12 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2861-1
- Update to b2861
* Sun May 12 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2860-2
- Added changelog
* Sun May 12 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2860-1
- Bump version to b2860
* Sun May 12 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2619-5
- Upgrade to b2860 tag
* Sun May 12 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2619-4
- Added ccache build dependency because LLAMA_CCACHE=ON is the default
* Sun May 12 2024 Mohammadreza Hendiani <man2dev@fedoraproject.org> - b2619-3
- Added numactl as a weak dependency
* Thu Apr 11 2024 Tom Rix <trix@redhat.com> - b2619-2
- New sources
* Thu Apr 11 2024 Tomas Tomecek <ttomecek@redhat.com> - b2619-1
- Update to b2619 (required by llama-cpp-python-0.2.60)
* Sat Mar 23 2024 Tom Rix <trix@redhat.com> - b2417-2
- Fix test subpackage
* Sat Mar 23 2024 Tom Rix <trix@redhat.com> - b2417-1
- Initial package
/usr/include/ggml-alloc.h
/usr/include/ggml-backend.h
/usr/include/ggml-blas.h
/usr/include/ggml-cann.h
/usr/include/ggml-cpu.h
/usr/include/ggml-cuda.h
/usr/include/ggml-kompute.h
/usr/include/ggml-metal.h
/usr/include/ggml-rpc.h
/usr/include/ggml-sycl.h
/usr/include/ggml-vulkan.h
/usr/include/ggml.h
/usr/include/llama.h
/usr/lib/.build-id
/usr/lib/.build-id/a9
/usr/lib/.build-id/a9/402f56cb53e71ba36d4090efc7d39a5ba0ea2f
/usr/lib/.build-id/ad
/usr/lib/.build-id/ad/b2edf21db57f50f92641ff6f3b960510491bb0
/usr/lib/pkgconfig/llama.pc
/usr/lib64/cmake/llama
/usr/lib64/cmake/llama/llama-config.cmake
/usr/lib64/cmake/llama/llama-version.cmake
/usr/lib64/libggml-base.so
/usr/lib64/libggml-cpu.so
/usr/lib64/libggml-hip.so
/usr/lib64/libggml.so
/usr/lib64/libllama.so
/usr/share/doc/llama-cpp-devel
/usr/share/doc/llama-cpp-devel/README.md
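Since the package ships a pkg-config file (/usr/lib/pkgconfig/llama.pc) alongside the headers and shared libraries, a program that includes llama.h can be built against it in the usual pkg-config way. A minimal sketch, assuming llama-cpp-devel is installed and that main.c is a placeholder name for your own source file:

```shell
# Query the compile and link flags provided by the packaged llama.pc.
pkg-config --cflags --libs llama

# Build a program that #includes <llama.h> against the installed libraries
# (main.c is a hypothetical source file, not shipped by this package).
cc main.c $(pkg-config --cflags --libs llama) -o main
```

CMake-based projects can instead use the shipped /usr/lib64/cmake/llama/llama-config.cmake via `find_package(llama)`.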
Generated by rpm2html 1.8.1
Fabrice Bellet, Wed Jan 29 03:15:45 2025