Name: llama-cpp-devel
Version: b2619
Release: 1.fc41
Distribution: Fedora Project
Vendor: Fedora Project
Packager: Fedora Project
Group: Unspecified
Size: 218091
Build date: Thu Apr 11 14:10:32 2024
Build host: buildhw-x86-08.iad2.fedoraproject.org
Source RPM: llama-cpp-b2619-1.fc41.src.rpm
Url: https://github.com/ggerganov/llama.cpp
Summary: Port of Facebook's LLaMA model in C/C++
The main goal of llama.cpp is to run the LLaMA model using 4-bit integer quantization on a MacBook:

* Plain C/C++ implementation without dependencies
* Apple silicon first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
* AVX, AVX2 and AVX512 support for x86 architectures
* Mixed F16 / F32 precision
* 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit integer quantization support
* CUDA, Metal and OpenCL GPU backend support

The original implementation of llama.cpp was hacked together in an evening. Since then, the project has improved significantly thanks to many contributions. It is mainly for educational purposes and serves as the main playground for developing new features for the ggml library.
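To give a sense of what block-wise integer quantization means here, below is an illustrative pure-Python sketch of symmetric 4-bit quantization: weights are split into fixed-size blocks, each block gets one floating-point scale, and each weight is rounded to a 4-bit integer in [-8, 7]. This only demonstrates the general idea; ggml's actual Q4_0 format packs two values per byte and stores fp16 scales, and the function names here are made up for the example.

```python
def quantize_q4_symmetric(weights, block_size=32):
    # Illustrative symmetric 4-bit block quantization (the idea behind
    # ggml's Q4_0; the real on-disk layout differs in its details).
    quants, scales = [], []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        amax = max(abs(w) for w in block)
        d = amax / 8.0 or 1.0            # one scale per block; avoid div by zero
        q = [max(-8, min(7, round(w / d))) for w in block]
        quants.append(q)
        scales.append(d)
    return quants, scales

def dequantize_q4(quants, scales):
    # Reconstruct approximate weights: w ~= q * d, block by block.
    return [q * d for qs, d in zip(quants, scales) for q in qs]
```

Each stored weight then costs roughly 4 bits plus a shared per-block scale, which is where the large memory savings over F16/F32 come from.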
License: MIT AND Apache-2.0 AND LicenseRef-Fedora-Public-Domain
* Thu Apr 11 2024 Tomas Tomecek <ttomecek@redhat.com> - b2619-1
- Update to b2619 (required by llama-cpp-python-0.2.60)
* Sat Mar 23 2024 Tom Rix <trix@redhat.com> - b2417-2
- Fix test subpackage
* Thu Mar 14 2024 Tom Rix <trix@redhat.com> - b2417-1
- Update to b2417
* Sat Dec 23 2023 Tom Rix <trix@redhat.com> - b1695-1
- Initial package
/usr/include/ggml-alloc.h
/usr/include/ggml-backend.h
/usr/include/ggml.h
/usr/include/llama.h
/usr/lib64/cmake/Llama
/usr/lib64/cmake/Llama/LlamaConfig.cmake
/usr/lib64/cmake/Llama/LlamaConfigVersion.cmake
/usr/lib64/libllama.so
/usr/share/doc/llama-cpp-devel
/usr/share/doc/llama-cpp-devel/README.md
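Since the package installs a CMake package config (/usr/lib64/cmake/Llama/LlamaConfig.cmake), a downstream project can locate the library with find_package. A minimal sketch follows; `my_app` is a hypothetical consumer, and the exact variables and targets exported depend on the LlamaConfig.cmake shipped in this build, so check that file and adjust accordingly:

```cmake
cmake_minimum_required(VERSION 3.14)
project(my_app CXX)

# Picks up /usr/lib64/cmake/Llama/LlamaConfig.cmake from llama-cpp-devel
find_package(Llama REQUIRED)

add_executable(my_app main.cpp)
# Assumes the config provides LLAMA_INCLUDE_DIR and a linkable llama library;
# verify against the installed LlamaConfig.cmake.
target_include_directories(my_app PRIVATE ${LLAMA_INCLUDE_DIR})
target_link_libraries(my_app PRIVATE llama)
```

The headers listed above (llama.h plus the ggml headers it depends on) live directly in /usr/include, so plain `-lllama` builds without CMake also work.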
Generated by rpm2html 1.8.1
Fabrice Bellet, Thu May 9 07:43:34 2024