The search service can find packages by name (apache), provides (webserver), absolute file name (/usr/bin/apache), binary (gprof), or shared library (libXm.so.2) in the standard path. It does not yet support multiple arguments.
System and Arch are optional filters; for example, System could be "redhat", "redhat-7.2", "mandrake", or "gnome", and Arch could be "i386" or "src", depending on your system.
The llama.cpp library provides a C++ interface for running inference with large language models (LLMs). Initially designed to support Meta's LLaMA model, it has since been extended to work with a variety of other models. This package includes the llama-cli tool to run inference using the library.
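Since the package's main deliverable is the llama-cli binary, a minimal invocation might look like the following sketch. The model path and prompt are placeholders, and the flags shown (`-m`, `-p`, `-n`) are the common ones in recent llama.cpp releases; check `llama-cli --help` on your system for the exact set.

```shell
# Run a short completion with a local GGUF model.
# -m: model file in GGUF format (path here is a placeholder)
# -p: prompt text
# -n: maximum number of tokens to generate
llama-cli -m /path/to/model.gguf -p "Explain RPM in one sentence." -n 64
```

llama.cpp expects models converted to its GGUF format; the RPM ships only the tool, so a model file must be obtained separately.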
Package | Summary | Distribution | Download |
llamacpp-4501-1.1.aarch64.html | llama-cli tool to run inference using the llama.cpp library | OpenSuSE Ports Tumbleweed for aarch64 | llamacpp-4501-1.1.aarch64.rpm |
llamacpp-4501-1.1.s390x.html | llama-cli tool to run inference using the llama.cpp library | OpenSuSE Ports Tumbleweed for s390x | llamacpp-4501-1.1.s390x.rpm |
llamacpp-4501-1.1.x86_64.html | llama-cli tool to run inference using the llama.cpp library | OpenSuSE Tumbleweed for x86_64 | llamacpp-4501-1.1.x86_64.rpm |
Generated by rpm2html 1.6